Department: Biomedical Engineering
Name: Stephen Branch
Date Time: Tuesday, November 12th, 2024 - 10:00 a.m.
Advisor: Dr. Dana Spence
With over 10.5 million units of red blood cells (RBCs) transfused in 2021 in the United States alone, blood transfusions are one of the most common hospital procedures. These life-saving interventions are necessary to treat a variety of conditions that result in decreased hemoglobin levels. Common causes include anemia, hemoglobinopathy, cancer, chemotherapy, radiotherapy, and blood loss from trauma or major surgeries. Despite centuries of research into the storage of RBCs for transfusion, current methods cannot prevent degradation of these cells. Detrimental biochemical and physical changes occur after even short periods of storage. This collection of harmful storage-induced changes is known as the storage lesion. The storage lesion can be broadly categorized into oxidative damages and metabolic impairments. Oxidative damages include generation of reactive oxygen species, lipid and protein oxidation, and degradation of cellular structure leading to severe morphological changes. Metabolic impairments lead to accumulation of lactate, acidifying the cellular milieu, and decreases in adenosine triphosphate (ATP) and 2,3-diphosphoglycerate levels. Transfusion of RBCs significantly impacted by the storage lesion raises questions of patient safety. Though contemporary transfusion medicine has mitigated clinical complications from these procedures, they are not without risk. Complications range from transfusion-transmitted infections to fatal acute reactions, such as transfusion-associated circulatory overload. Minimizing the number of transfusions required is a key objective in blood banking research. This may be achieved by improving the efficacy of transfusions through reduction of the storage lesion. This is addressed in this work through the use of modified additive solutions, which are used to prolong viability of RBCs in storage, and investigation of post-storage cellular rejuvenation. The additive solutions used today contain extreme amounts of glucose, ranging from 45 mM to 111 mM. Such hyperglycemic conditions have been implicated in the development of various aspects of the storage lesion. Previous reports have demonstrated that a normoglycemic additive solution containing just 5.5 mM glucose is effective in reducing oxidative stress and osmotic fragility in stored RBCs as well as increasing ATP release and cellular deformability. As glucose is metabolized throughout storage, an RBC feeding system was previously developed to automate maintenance of normoglycemic conditions. However, aspects of the design of this system limited experimental control and regulatory compliance, and therefore translational potential. Here, a second-generation RBC feeding system is developed and employed in additional normoglycemic RBC storage studies. Beyond validating the performance of this system, novel benefits of normoglycemic storage such as reduced cellular glycation and hemolysis are confirmed. Expanding on previous studies, rejuvenation of RBCs stored under these conditions via post-storage washing is investigated. This rejuvenation results in significant improvements to the health of stored RBCs; both cellular deformability and morphology were consistently restored to near-normal.
Department: Biomedical Engineering
Name: Vittorio Mottini
Date Time: Tuesday, August 27th, 2024 - 11:00 a.m.
Advisor: Prof. Jinxing Li
The rapid advancement of wearable technology has introduced a new era of human-machine interaction, with soft bioelectronics emerging as a novel field at the intersection of materials science, electrical engineering, and healthcare. Soft bioelectronics offers unprecedented opportunities for seamless integration with the human body, promising to transform personal health monitoring, medical diagnostics, and human-machine interfaces. These flexible and stretchable electronic systems conform to the complex topography of human skin, adapting to its constant motion and deformation while minimizing mechanical stress on tissues. This adaptability enables long-term, comfortable wear for continuous physiological monitoring, advanced prosthetic control, or novel human augmentation, overcoming the limitations of rigid electronic systems. Despite significant progress, challenges persist in developing skin-interfaced electronics that maintain high performance across diverse skin conditions and age groups. This dissertation presents the development and evaluation of "InSkin," an innovative, inclusive skin-interfaced electronic platform designed for high-fidelity, high-density, multi-channel electrophysiological recording. The InSkin technology addresses critical challenges in current skin-interfaced electronics, particularly the variability in signal quality across diverse skin conditions and age groups. A novel conductive polymer composite, Solution WGP, was engineered to create a conformal, stretchable interface that adapts to various skin morphologies. This material demonstrated exceptional mechanical properties, maintaining electrical functionality at strains of up to ~1200% while achieving a 93.18% reduction in electrode-skin impedance compared to commercial electrodes. Comprehensive characterization studies revealed InSkin's superior performance across different skin types. The device maintained 80.65% of its signal amplitude on wrinkled skin compared to smooth skin and 100% on hairy skin compared to shaved skin. Long-term stability tests showed 75% signal quality retention after 24 hours of continuous wear. High-density surface electromyography (sEMG) mapping capabilities were demonstrated using a 32-channel array with 12 mm inter-electrode spacing. This enabled detailed visualization of muscle activity patterns, including motor unit action potential propagation and innervation zone identification, showcasing potential applications in neuromuscular research and personalized rehabilitation. Advanced gesture recognition algorithms integrated with the InSkin platform achieved 97.7% accuracy in classifying ten hand gestures, significantly outperforming commercial electrodes. This performance was consistent across age groups, with only a 4% reduction in accuracy for older participants. The system's efficacy was further validated through successful integration with a prosthetic hand prototype, demonstrating the potential for intuitive, high-precision control.
Department: Biomedical Engineering
Name: Evran Ural
Date Time: Thursday, August 22nd, 2024 - 9:30 a.m.
Advisor: Chris Contag
Many conditions of chronic inflammation, such as ulcerative colitis, predispose an individual to developing cancer. The predisposition of chronically inflamed tissue to neoplasia and malignancy is referred to as immunocarcinogenesis. Colitis is characterized by relapsing episodes of inflammation and ulceration in the colonic mucosa. Macrophages play an important role in regulating the immune response in colitis, and secrete proinflammatory factors that may promote colitis-associated cancer. Extracellular vesicles (EVs) have been shown to mediate colitis and colon cancer progression, and there is accumulating evidence suggesting that the activation states of macrophages influence EV secretion and signaling effects in inflammation and cancer. Macrophages in the ulcerated colonic submucosa are exposed to increased levels of bacterial endotoxins, so we sought to model EVs from colitis in culture using EVs from lipopolysaccharide (LPS)-activated macrophages. To investigate the impact of EVs from macrophages on mediating colitis-associated cancer, we characterized EVs from LPS-activated macrophages, treated colon cells and tumors with isolated macrophage EVs, and analyzed the inflammatory and protumorigenic effects in vitro and in vivo. Our results provide evidence that EVs released from LPS-activated macrophages increase inflammation in the colonic epithelium; promote cell growth, anchorage-independent growth, and protumorigenic protein expression in transformed cells; and significantly alter the local immune environment. These findings have implications for the origins and progression of colitis-associated malignancy.
Department: Biomedical Engineering
Name: Daniel Marri
Date Time: Tuesday, May 14th, 2024 - 1:00 p.m.
Advisor: Prof. Sudin Bhattacharya
Circadian clocks are intrinsic molecular oscillators present in cells across prokaryotes and eukaryotes that synchronize physiological processes with external cues, enabling organismal adaptation and survival. These clocks regulate crucial biological functions, including sleep-wake cycles, thermoregulation, hepatic metabolism, and hormonal secretion, through the rhythmic expression of clock-controlled genes. Perturbations in the circadian clock network can contribute to the pathogenesis of various disorders, such as obesity, diabetes, inflammatory conditions, and certain cancers. To understand the effect of 2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD) on the spatial and temporal dynamics of the circadian clock genes, interpretable machine learning models were developed to predict BMAL1 binding to DNA in liver, kidney, and heart tissues using genetic and epigenetic features (binding sequence, DNA shape, and histone modifications). Furthermore, a spatiotemporal multicellular mathematical model of the mammalian circadian clock in the liver lobule was developed to investigate intercellular coupling for the synchronization of circadian clock expression across the portal-to-central axis. Lastly, to understand the interplay between the spatial and temporal axes of gene expression in the liver, particularly in drug metabolism pathways, non-linear mixed-effects models were developed to analyze the acute effect of TCDD on the spatiotemporal expression of genes in the hepatic lobule.
These findings provide a comprehensive examination of circadian rhythms and their disruption by TCDD in the liver, encompassing molecular mechanisms, predictive modeling, and spatiotemporal dynamics. Also, the study offers valuable insights into the intricate regulatory mechanisms governing circadian rhythms, the significance of zonation in hepatic functions, and the interplay between spatial and temporal gene expression. Taken together, our findings have the potential to contribute significantly to our understanding of circadian resilience and the mitigation of pathological conditions, particularly in the context of drug metabolism pathways and hepatic function.
Department: Biomedical Engineering
Name: Logan Soule
Date Time: Monday, April 8th, 2024 - 9:00 a.m.
Advisor: Prof. Dana Spence
Red blood cell (RBC) transfusions are life-saving procedures for a wide variety of patient populations, resulting in nearly 30,000 transfusions each day within the United States. However, transfusions can also result in complications for patients, including inflammation, edema, infection, and organ dysfunction. These poor transfusion outcomes may be related to irreversible chemical and physical damages that occur to RBCs during storage, called the “storage lesion”. These damages, including diminished ATP production/release, decreased deformability, increased oxidative stress, and increased membrane damage, may result in poor functionality when transfused. The damage that occurs during storage may be due to the hyperglycemic nature of current anticoagulants and additive solutions used for RBC storage. All FDA-approved storage solutions contain glucose at concentrations more than eight times higher than those found in the bloodstream of a healthy individual. Previous work has already shown that storing RBCs at physiological concentrations of glucose (4-6 mM), or normoglycemic conditions, resulted in the alleviation of many storage-induced damages, including an increase in ATP release, increased deformability, reduced osmotic fragility, and decreased oxidative stress. However, this storage technique was also accompanied by many limitations in its translation to clinical practice. The manual feeding of glucose to normoglycemic stored RBCs to maintain physiological levels of glucose introduced both a breach in sterility and unreasonable labor requirements that could not be translated to clinical practice. Additionally, the low-volume storage (< 2 mL) method with custom PVC bags used in previous work may not elicit similar benefits when scaled up to larger volumes with commercially available blood collection bags.
This work overcame these limitations through the design and implementation of an autonomous glucose delivery system that maintained normoglycemia of stored RBCs for 39 days in storage without manual intervention, while also maintaining sterility. This system was then used to store RBCs under normoglycemic conditions and monitor key storage lesion indicators, resulting in reduced osmotic fragility, decreased oxidative stress, and reduced morphological changes. There was also no impact on glycolytic activity or hemolysis levels, improving upon previous work which reported significant hemolysis that surpassed the FDA threshold of 1%. These data solidify and improve upon previous results, indicating that normoglycemic RBC storage results in reduced damages in storage that may translate to better in vivo function. The autonomous glucose delivery system also significantly advances the applicability of the normoglycemic storage technique to clinical practice, making large-scale studies possible. Additionally, a novel rejuvenation therapy was investigated, highlighting the capability of albumin, an abundant plasma protein, to reverse the membrane damages seen during RBC storage, resulting in RBCs closer in shape and size to that of fresh RBCs.
Department: Biomedical Engineering
Name: Meghan Hill
Date Time: Wednesday, March 6th, 2024 - 12:00 p.m.
Advisor: Taeho Kim
Glioblastoma is one of the most aggressive and invasive types of cancer. Unfortunately, because its symptoms overlap with those of other neurological diseases and are difficult to identify with diagnostic measures, it is not discovered until stage four. At this point, patients have limited options for care and ultimately end up in palliative care not long after diagnosis. The blood-brain barrier (BBB) has proved to be a difficult boundary for current modern medicines as it prevents adequate accumulation within the brain. As gliomas often form in inoperable parts of the brain, conventional FDA-approved therapies prove to be ineffective. Within the past ten years, targeting strategies using RGD peptides have proven effective at transporting drugs, contrast agents, or nanoparticle delivery vehicles across the barrier, but suffer from off-targeting effects due to expression of the peptide-recognizing integrins on the surface of healthy cells. Extracellular vesicles, particularly exosomes, have shown promising specific targeting effects of cells from which the vesicles originate. They have also shown a remarkable ability to pass through the BBB innately. The focus of this project was the development of a glioblastoma-derived exosome-coated Prussian Blue nanoparticle (Exo:PB) that could easily accumulate within glioblastoma tissues and provide enhanced diagnostics as well as localized therapy. Prussian Blue nanoparticles are FDA-approved for scavenging heavy metals present within the body after extreme radiation exposure. Based on their exceptional application to photothermal therapy and ability to be used for photoacoustic imaging and MRI, they are an ideal candidate for glioblastoma theranostics. By investigating the distribution and accumulation patterns of these newly developed Exo:PB nanoparticles within preclinical mouse models, earlier diagnosis and treatment intervention can be achieved for glioblastoma.
Department: Biomedical Engineering
Name: Sarah Wright
Date Time: Wednesday, December 13th, 2023 - 9:00 a.m.
Advisor: Michele Grimm
Translational applications of biomedical engineering, including work to understand and reduce the risk of injuries, can involve both experimental work – in the laboratory or clinical settings – and computational modeling. This biomechanical project was conducted as part of a larger effort to understand birth-related injuries to the neonatal brachial plexus – a complex set of nerves that begins from the cervical (C5-C8) and thoracic (T1) nerve roots. During both vaginal and cesarean births, these nerves are susceptible to an injury known as Neonatal Brachial Plexus Palsy (NBPP). Conducting clinical or experimental injury analysis of NBPP is challenging due to the vulnerable population involved – infants. The use of computational modeling allows the exploration and analysis of this nerve complex to investigate the effect of maternal and neonatal parameters on brachial plexus stretch during the birth process. To date, no anatomically accurate adult or neonatal brachial plexus models have been published. An anatomically accurate finite element model (FEM) has been developed that will allow in-depth analysis of NBPP injuries by providing a better understanding of stress distribution within the nerves. In this dissertation project, the model was developed, validated, and utilized to provide insight into the progression of injury when force is applied. The outcomes of this project have advanced both computational modeling and knowledge regarding brachial plexus injury in neonates. We anticipate that our novel, three-dimensional neonatal brachial plexus model will be available in the future to simulate and study specific brachial plexus injuries (Erb’s Palsy, Klumpke’s Palsy, etc.) and to further investigate patterns of injury in NBPP. The current and future applications of the model will allow researchers, neurosurgeons, and other medical professionals to scientifically evaluate biomechanical aspects of neonatal brachial plexus injuries – in the hope of identifying ways to lessen the chances of these injuries occurring.
Department: Biomedical Engineering
Name: Cort Thompson
Date Time: November 7, 2023 - 3:00 PM
Advisor: Erin Purcell
A SYSTEMATIC CHARACTERIZATION OF THE TISSUE RESPONSE IN THE BRAIN TO IMPLANTED ELECTRODE ARRAYS
Intracortical brain interfaces are an ever-evolving technology with growing potential for clinical and research applications. However, the tissue response to implanted devices can limit their functional longevity. The chronic tissue response to these devices has typically been characterized by glial scarring, inflammation, oxidative stress, neuronal loss, and blood-brain barrier disruptions. To ameliorate or circumvent the tissue response, numerous next-generation electrodes featuring various biomaterials and novel designs have been developed with some success. However, recordable neuronal signals can still decline in apparently healthy tissue that presents with minor glial scarring and normal neuronal densities. Therefore, it is essential that we better understand the tissue response to inform and guide the design of cortical implants with greater biocompatibility. Recent RNA-seq datasets have identified hundreds of genes associated with gliosis, neuronal function, myelination, and cellular metabolism that are spatiotemporally expressed in neural tissues following the insertion of microelectrodes. There is also evidence to suggest that these differentially expressed genes may be spatiotemporally expressed at the protein level across multiple cell types in the cortical environment. This new understanding of the broader tissue response at the transcriptional level may now allow for more targeted interrogation of the biological pathways involved in the tissue response. By understanding the tissue response of the brain, it may now be possible to precisely identify the biological mechanisms that impact device performance and guide the creation of more biocompatible neural interfaces.
Department: Biomedical Engineering
Name: Alesa Netzley
Date Time: Monday, October 16, 2023 - 10:00 a.m.
Location: 1404 Interdisciplinary Science and Technology Building and Zoom
Advisor: Prof. Galit Pelled
Traumatic brain injury (TBI) is a leading cause of death and disability among children and adolescents in the United States. An estimated 90% of head-injury-related emergency department visits result in a diagnosis of mild TBI (mTBI), also known as concussion. Historically ignored as a major public health concern, concussion can cause lasting neurocognitive changes that can persist for years or even decades, well beyond the typical 2-week clinical recovery period. Postconcussive syndrome (PCS) encompasses a constellation of cognitive and physiological symptoms that continue to occur weeks, months, or years after a concussion. In children and teenagers, these impairments can disrupt an individual’s developmental trajectory, leading to underperformance in academics, poor integration into the workforce, and diminished quality of life in adulthood. Preclinical neuroscience has greatly improved our understanding of the consequences of head injury; however, vast architectural differences between rodent and human brains have resulted in dismal translation of therapeutic strategies from the bench to the bedside. In recent decades, the domestic pig (Sus scrofa) has attracted considerable attention as a highly promising model animal for studying age-specific responses to mechanical trauma due to striking similarities between pig and human brain anatomy, development, and neuroinflammatory response. To add to the growing body of work utilizing pigs for the study of brain injury, we have developed a model of pediatric concussion in juvenile Yucatan miniature pigs. We conduct an extensive battery of cognitive and behavioral assessments designed to reveal post-concussive complications in pigs. We also conduct clinically relevant live imaging procedures to better understand the effects concussion can have on brain connectivity and function. The utilization of an animal model whose neuroanatomy closely resembles the human brain is critical to the development of therapeutic protocols that are effective and safe.
Department: Biomedical Engineering
Name: Ethan Tu
Date Time: Thursday, August 3, 2023 - 10:00 a.m.
Location: 1404 Interdisciplinary Science and Technology Building and Zoom
Advisor: Prof. Adam Alessio
Artificial intelligence (AI) has evolved immensely in recent years, with AI achieving human levels of performance on a wide variety of tasks. However, AI has had limited adoption in clinical settings despite its promising prediction, classification, and pathology detection applications. For a machine learning (ML) model to train effectively, the observed data must be a diverse, accurate representation of the true distribution. Therefore, to properly estimate the true distribution, extremely large datasets become necessary. In healthcare scenarios, datasets of sufficient size may be rare or absent, thus hindering the training of ML models. One of the ways to mitigate this problem is through data augmentation, where we supplement our datasets with slightly modified copies of already existing data or newly created synthetic data. Recently, sophisticated data augmentation methods have been built on a class of neural networks (NNs) called Generative Adversarial Networks (GANs), which generate new images of high perceptual quality. This dissertation describes the design and development of a new type of GAN, named near-pair patch cycleGAN (NPP-cycleGAN), which generates realistic pathology-present images. Here, we train and test this network using pediatric chest radiographs. We demonstrate that the proposed GAN can generate high-quality fracture-present pediatric chest radiographs. With the addition of these synthetic images to an object detector’s training dataset, we are able to improve fracture detection performance.
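As a minimal sketch of the augmentation step described above (assuming a PyTorch-style training pipeline; the dataset names and tensor shapes below are hypothetical placeholders, not taken from the dissertation):

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

# Hypothetical stand-ins for real and GAN-generated fracture-present radiographs;
# in practice these would be image datasets loaded from disk with their labels.
real_radiographs = TensorDataset(torch.randn(100, 1, 256, 256),
                                 torch.ones(100, dtype=torch.long))
synthetic_radiographs = TensorDataset(torch.randn(40, 1, 256, 256),
                                      torch.ones(40, dtype=torch.long))

# The augmentation step: pool real and synthetic examples so the downstream
# object detector trains on a larger, more diverse distribution.
augmented_train_set = ConcatDataset([real_radiographs, synthetic_radiographs])
loader = DataLoader(augmented_train_set, batch_size=8, shuffle=True)

for images, labels in loader:
    pass  # the detector's forward/backward pass would run here
```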
Department: Biomedical Engineering
Name: Kylie Smith
Date Time: Thursday, July 20, 2023 - 9:00 a.m.
Location: 1404 Interdisciplinary Science and Technology Building and Zoom
Advisor: Prof. Kurt Zinn
Molecular imaging is a critical tool for the management of neurodegenerative disease. In particular, positron emission tomography (PET) has provided new ways to identify distinct subtypes in Alzheimer’s Disease, inform disease management, and monitor treatment progress. However, the power of PET imaging is challenged by limitations to accessibility that hinder its adoption. Opportunities to reduce risk of failure and improve the efficiency of PET research are of high priority, given the high costs of conducting a PET study and the urgent need for improved imaging techniques and interventions. This dissertation describes the design, development, and implementation of custom research tools to improve efficiency for pre-clinical PET imaging. A modular multi-rodent imaging bed was designed and validated for high throughput PET/MR, then de-risked for commercialization. Commercialization activities included evaluation of candidate materials for interference in pre-clinical imaging modalities, a value-in-use study, and incorporation of desirable features identified through informational interviews with end users. Anatomically derived 3D-printed phantoms were used to develop methods to track nose-to-brain transfer by PET, which were then applied in nonhuman primates. Using this approach, we were able to sensitively quantify the distribution of F-18-FB-insulin throughout the brain of cynomolgus macaques following nose-to-brain delivery. Clinically relevant dosing tools were prioritized to facilitate rapid translation to humans for evaluation of nose-to-brain insulin as a therapeutic for Alzheimer’s Disease. Together, it is hoped that these methods will reduce barriers to participation in PET neuroimaging research and help scientists get started, or get farther, with the resources they have available.
Department: Biomedical Engineering
Name: Manoj Madhavan
Date Time: Tuesday, May 2, 2023 - 11:00 a.m.
Location: 1404 ISTB Building and Zoom
Advisor: Prof. Ripla Arora
During morphogenesis, 2D epithelial tissues undergo architectural changes to form 3D structures called folds. Folding is a key phenomenon during embryogenesis and organogenesis and is essential for several physiological functions. For example, folds in the stomach (rugae) and intestine (crypts) increase surface area for nutrient absorption, while folds in the brain (gyri) increase cortical surface area for neural processing. The uterine luminal epithelium in mammals including humans, horses, and rodents, undergoes structural changes to form folds. Although improper uterine folding in horses results in pregnancy failure, the precise role of folds in embryo implantation remains unknown. Using 3D imaging and 3D reconstruction of the mouse uterus, we uncover dynamic changes in the luminal folding pattern. We show that in a healthy pregnancy, the uterus forms transverse folds prior to embryo implantation. Using models of aberrant uterine folding, we show that longitudinal folds lead to embryo-uterine axes misalignment and abnormal chamber formation. Further, we show that increased estrogen signaling and reduced progesterone signaling lead to aberrant longitudinal folds. Finally, we extend our findings to examine the effects of excess estrogen signaling on folding during hyperstimulation – a clinical procedure performed during In Vitro Fertilization (IVF) to increase egg numbers for a higher success rate of implantation and pregnancy. In women, pregnancies following hyperstimulation often lead to preterm birth, placental abnormalities, and other complications. Our findings suggest that hyperstimulation in mice leads to pregnancy loss due to aberrant folding. Our research can be potentially used to improve pregnancy outcomes following IVF and fresh embryo transfer. In addition to fueling future research on endometrial folds in humans, our research will open up new avenues for the treatment of infertility and provide new targets for diagnosis based on uterine 3D structure.
Department: Biomedical Engineering
Name: Harvey Lee
Date Time: Wednesday, April 5, 2023 - 2:00 p.m.
Location: 1404 Interdisciplinary Science and Technology Building & Zoom
Advisor: Prof. Assaf Gilad
The use of synthetic biology to carry out functions achieved through conventional means is often met with higher performance and cheaper production costs, as exemplified by the modern development and production of recombinant human insulin in E. coli. This dissertation focuses on utilizing synthetic biology to access the versatility of proteins and unique features of Rare Earth Elements (REEs) for molecular imaging and biomedical engineering. REEs are an essential resource for modern technology – anything with a screen, lens, glass, lights, magnets, steel alloys, or batteries requires the use of REEs. In addition to their properties in magnetism, chemical reactivity, and temperature durability, REEs are also heavily utilized for their unique spectroscopic properties, making them crucial for almost every sector of industry, as well as molecular imaging-assisted diagnostics. Current methods for mining REEs involve the extensive use of harsh chemicals and intense labor, not to mention low yields and excessive byproducts. Moreover, not only do REEs exist on Earth in a finite amount, but their mining and distribution are alarmingly reliant on very few sources, leaving the availability of such resources vulnerable to unforeseen circumstances. It would therefore be beneficial for nations to develop REE recycling technology with higher yields, lower costs, and environmentally friendlier methods. Herein, motifs found in nature were integrated into newly designed synthetic proteins to enable REE binding, leading to applications in bioremediation and theranostics.
Department: Biomedical Engineering
Name: Alexander Bricco
Date Time: Wednesday, March 22, 2023 - 10:00 a.m.
Location: 1404 Interdisciplinary Science and Technology Building and Zoom
Advisor: Prof. Assaf Gilad
Reporter genes are important tools for researchers studying molecular and cellular biology, as they provide both the location and a measurable readout of the expression level of a given gene. Reporter genes for MRI allow these functions to be performed noninvasively and at arbitrary tissue depth. Chemical Exchange Saturation Transfer (CEST)-based reporter genes have shown promise as reliable reporters in MRI, but the relatively low sensitivity of the method has limited its utility in research settings. Initial attempts to optimize existing CEST reporter genes proved difficult due to a series of technical challenges, leading to the development of a process in which iterative machine learning and experimentation were used to develop CEST reporter genes that produce nearly a fourfold increase in contrast over prior art. Additionally, POET is used to generate a reporter gene that produces significant contrast at a farther downfield frequency than prior CEST reporter genes.
Department: Biomedical Engineering
Name: Victoria Avery Toomajian
Date Time: Friday, February 24, 2023 - 10:00 a.m.
Location: 1404 ISTB Building and Zoom
Advisor: Prof. Chris Contag
Delivery tools such as viral vectors, lipids, liposomes, polymers, polymeric micelles, inorganic nanoparticles, and extracellular vesicles have been studied for targeted therapeutic delivery. A number of these have been approved by the Food and Drug Administration for treatment of disease and many are currently being investigated in clinical trials. Extracellular vesicles (EVs) are an emerging therapeutic delivery tool based on their ability to be naturally taken up by cells, low immunogenicity, and potential for inherent targeting ability. EVs are small membrane bound particles released by cells and are considered to be a naturally occurring method of cell-to-cell communication. The targeting ability of EVs has been demonstrated using tumor cell-derived EVs that show increased uptake in tumors and tumor cells. In addition, EVs from immune cells have been used to target areas of inflammation, and one potential benefit of using EVs is that tracking studies have shown that EVs cross tissue barriers in vivo. EVs have been tracked by common imaging modalities, all of which rely on labeling the EV with a modality-specific tracer, such as inorganic nanoparticles, fluorescent dyes, bioluminescent or fluorescent proteins, or radioactive tags. One of the emerging imaging methods for tracking EVs in vivo is magnetic particle imaging (MPI), which uses superparamagnetic iron oxide nanoparticles (SPIOs) as the tracer. Once labeled with SPIOs, EVs can be tracked in vivo with MPI, which offers the significant advantages of being sensitive and directly quantitative. Development of EVs as a therapeutic delivery tool can be enhanced through imaging, and here I evaluate this for primary cancer and metastasis as well as cardiovascular disease.
Department: Chemical Engineering and Materials Science
Name: Ashiq Shawon
Date Time: Monday, November 25th, 2024 - 11:00 a.m.
Advisor: Dr. Alexandra Zevalkink
The crystal structure and bonding characteristics of intermetallic compounds critically influence their thermal and elastic properties. Polymorphic phase transitions – where crystal structures transform without altering atomic composition – offer a unique window to directly probe the relationship between atomic arrangement and thermal properties. Intermetallic Zintl compounds provide an intriguing case study, as they exhibit both ionic and covalent bonding frameworks within a single crystal lattice. Within the AMX Zintl family (where A = alkali-metal or alkaline earth metal, M = transition metal, X = non-metal), a series of closely related crystal structures feature a covalent sublattice that transitions progressively from a two-dimensional (2D), graphene-like configuration to a fully interconnected three-dimensional (3D) network. By examining these crystallographic transitions, we uncovered concrete correlations between the dimensionality of covalent bonding and the resulting thermal properties.
We first explored the changes in thermal transport properties in the compound YbCuBi, as its crystal structure transitions from a flat 2D covalent sublattice to a buckled quasi-2D covalent network with periodic interlayer interactions. Using a combination of resonance ultrasound spectroscopy, inelastic neutron scattering, and first-principles calculations, we studied the impacts of this crystallographic transition on acoustic and optical phonons. Thermal conductivity measurements elucidated how changes in phonon energies and elastic behavior impact the thermal transport characteristics. Building on this, we also investigated a quasi-2D to 3D covalent phase transition in the CaAgSb1-xBix solid solution. Isoelectronic substitution of Sb by Bi systematically alters the elastic properties, while the crystallographic transition induces a ‘step-like’ change. Lattice parameters from X-ray diffraction reveal the underlying mechanism of the phase transition, while resonance ultrasound spectroscopy elucidates its impact on elastic properties. We also explore the limitations of the Wiedemann-Franz approximation for heat transport by charge carriers, highlighting the boundaries in our current understanding of thermal transport by quasiparticles.
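For reference, the Wiedemann-Franz approximation referred to above estimates the electronic contribution to thermal conductivity from the electrical conductivity; its standard statement (a textbook relation, not specific to this work) is:

```latex
\kappa_e = L\,\sigma\,T, \qquad
L_0 = \frac{\pi^{2}}{3}\left(\frac{k_B}{e}\right)^{2} \approx 2.44\times10^{-8}\ \mathrm{W\,\Omega\,K^{-2}}
```

Deviations of the effective Lorenz number L from the degenerate limit L_0 are one way the limitations discussed above can manifest.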
In the process, we discover promising thermoelectric properties in CaAgSb, attributed to its low thermal conductivity and high electronic mobility. According to the single parabolic band model, reducing carrier concentration could potentially enhance the thermoelectric performance of CaAgSb. Therefore, we conduct a ‘phase-boundary mapping’ study, combining first-principles density functional theory calculations and experiments to elucidate the behavior of carrier-generating defects under different growth conditions. We identify Ag vacancies as the defects with the lowest formation energy under all growth conditions, limiting Fermi level tuning to a narrow window, and suggesting that other routes may be needed to optimize the thermoelectric efficiency of CaAgSb.
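The phase-boundary mapping described above typically builds on the standard first-principles defect formation energy (general expression, assumed here rather than quoted from this work):

```latex
E^{f}[D^{q}] = E_{\mathrm{tot}}[D^{q}] - E_{\mathrm{tot}}[\mathrm{host}]
 - \sum_i n_i \mu_i + q\,\bigl(E_F + E_{\mathrm{VBM}}\bigr) + E_{\mathrm{corr}}
```

where n_i is the number of atoms of species i added (n_i > 0) or removed (n_i < 0), μ_i are the chemical potentials fixed by the growth conditions, E_F is the Fermi level referenced to the valence-band maximum E_VBM, and E_corr is a finite-size correction; scanning the chemical potentials across the stable composition region is what maps out the accessible defect populations and carrier concentrations.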
Department: Chemical Engineering and Materials Science
Name: Demetrios A. Tzelepis
Date Time: Tuesday, November 19th, 2024 - 12:00 p.m.
Advisor: Dr. Lawrence Drzal
Lightweighting of automobiles and ground vehicles has facilitated the use of a wide range of materials such as high-strength aluminum, advanced high-strength steel alloys, and ceramics, along with various composites, based on the desired application. This inevitably leads to multi-material joints, where fusion welding is not possible. Adhesive bonding offers an alternative to fusion welding for mixed or multi-material joints. In military ground vehicle applications, these types of multi-material joints undergo not only quasistatic and fatigue loading but also high strain rate events such as a mine blast or ballistic penetration. Adhesives that exceed 10.0 MPa in shear strength and exhibit a displacement at failure greater than 3.81 mm are classified as ‘Group-1 adhesives’, as they exhibit the excellent stiffness-toughness balance that is needed for high strain rate applications.
Polyurethanes (PU), polyureas (PUa), and their intermediates, poly(urethane-ureas) (PUU), represent an industrially important and versatile class of polymers used in coatings, sealants, and adhesive applications. In terms of defense industrial applications, PUas have been used as explosion (blast)-resistant coatings that can suppress the rupture of thick steel plates or the spallation of masonry structures by dissipating shock wave energy. The reason for their versatility comes from their structure and morphology, which are comprised of hard segments and soft segments. Depending on the hard segment content and soft segment length, PU and PUa can range from hard and brittle (high hard-segment content) to soft and elastomeric (low hard-segment content). In other words, they can be tailored to have a balance of stiffness and toughness and may be a good choice for adhesive applications. In PU and PUa, the hard and soft segments can separate and form a percolated hard phase in a soft phase matrix. Additions of nanoparticles such as graphene nanoplatelets (GnP) give an additional microstructural dimension to PU and PUa. The effect of adding GnP on the adhesive, quasistatic fracture, and viscoelastic properties of PUa is not fully understood, and understanding it is a necessary first step in tailoring PUa formulations for adhesive applications and understanding their high strain rate properties.
In this work, a multidisciplinary approach (experimental and modeling) is used to elucidate the effect of GnP on the processing (chemistry), structure (phase separation), and property (quasi-static and viscoelastic) relationships of PUa-based nanocomposites. Model polyureas with hard segment weight fractions (HSWF) of 20, 30, and 40 percent were developed to explore the combined effect of HSWF and nano-additions of 0.5, 1.0, and 1.5 weight percent GnP on the quasi-static and viscoelastic properties. For model PUa formulations with higher HSWF, the effects of GnP additions on quasi-static tensile and viscoelastic properties were negligible, but at lower HSWF some improvement was seen in the viscoelastic properties together with simultaneous improvements in strength and ductility. Despite the complexity of the phase-separated microstructure of the PUa and the nanocomposite, time-temperature superposition (TTS) was shown to be valid for both the neat PUas and the PUa-GnP nanocomposites. Although the TTS shifts did not fit either the Arrhenius or the WLF model, they did fit a more recently developed two-state, two-(time) scale model. Furthermore, a micromechanical model based on fractional calculus showed excellent correlation between the experimentally obtained TTS curves and the mechanical modeling for both neat and composite PUa. The micromechanical model developed utilizes a few physical properties, such as modulus and relaxation time, to predict material viscoelastic behavior instead of the conventional Prony series, which has a large number of parameters with no relation to material properties. The micromechanical model parameters were evaluated at various nano-loadings and hard segment weight fractions, which showed that the effect of GnP was significantly less pronounced than the effect of HSWF.
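For context, the two conventional shift-factor models referenced above take the following standard forms (textbook expressions, not specific to this dissertation; T_ref is the reference temperature, E_a an apparent activation energy, and C_1, C_2 the WLF constants):

```latex
\text{Arrhenius:}\quad \log a_T = \frac{E_a}{2.303\,R}\left(\frac{1}{T}-\frac{1}{T_{\mathrm{ref}}}\right),
\qquad
\text{WLF:}\quad \log a_T = \frac{-C_1\,(T-T_{\mathrm{ref}})}{C_2 + T - T_{\mathrm{ref}}}
```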
In addition, single lap joints were used for an initial exploration of multiple formulations, from both a chemistry perspective (changing isocyanate type and diamine type) and a microstructural perspective (weight fraction of hard segments). Results indicate that Group-1 adhesive performance with cohesive failure can be achieved with PUa, showcasing the potential of PUa as an adhesive.
Overall, this work supports the feasibility of utilizing PUas in adhesive applications. The detailed characterization of PUas with varying HSWF and GnP content shows that the HSWF had a far greater effect on the properties of PUa than the additions of GnP. GnP did not have adverse or detrimental effects on the performance of the PUa. Future work can explore the advantage of GnP in adding multifunctionality to PUa, such as enhancing thermal and electrical conductivity. At the same time, GnP showed significant improvement in PUas with low HSWF, creating a wide range of potential applications for PUa-based bonded joints. Future work should also explore the high strain rate behavior of PUa bonded joints.
Department: Chemical Engineering and Materials Science
Name: Shalin Patil
Date Time: Tuesday, November 12th, 2024 - 1:00 p.m.
Advisor: Dr. Shiwang Cheng
Hydrogen bonding (H-bonding) is omnipresent, occurring in DNA, RNA, proteins, and water. The hallmark features of hydrogen bonding interactions are their directionality and reversibility: H-bonds have a bond angle between 135° and 180°, and they are relatively weak and can break and recombine on experimental time scales. Despite the wide acknowledgment of the directionality and reversibility of H-bonding interactions, their influence on molecular dynamics and macroscopic properties, such as flow or viscosity, remains far from fully understood. In this dissertation, we focus on one of the simplest types of H-bonding liquids, monohydroxy alcohols (MAs), to show how H-bonding interactions affect supramolecular structure formation, supramolecular dynamics (including the Debye relaxation process), and the relationship between these supramolecular structures and viscosity. In particular, we have employed a new experimental testing platform, rheo-dielectric spectroscopy, that reveals: (i) an interesting relationship between the structural relaxation time, t_α, and the Debye time, t_D, with t_D^2/t_α following an Arrhenius temperature dependence; (ii) the presence of an intermediate relaxation process with characteristic time, t_m, between t_α and t_D of MAs that is both dielectrically and rheologically active; (iii) t_m agrees excellently with the hydrogen-bonding exchange time of MAs from NMR measurements. These observations inspire new theoretical development, i.e., the living polymer model (LPM) (Figure 1), which enables a coherent explanation of how a wide range of molecular parameters affect the supramolecular structures and dynamics of monohydroxy alcohols, including the roles of molecular architecture, alcohol type, and dilution. The results have helped clarify several concepts in the current understanding of the dynamics of H-bonding liquids, such as the supramolecular chain breakup time, the average supramolecular chain size, and the H-bonding lifetime of MAs.
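A compact restatement of observations (i) and (ii) above, written in standard notation (the activation energy E_a and Boltzmann-factor form are assumed here for illustration, not quoted from the abstract):

```latex
\frac{t_D^{2}}{t_\alpha} \propto \exp\!\left(\frac{E_a}{k_B T}\right),
\qquad t_\alpha < t_m < t_D
```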
Department: Chemical Engineering and Materials Science
Name: Sabrina J. Curley
Date Time: Tuesday, November 5th, 2024 - 11:30 a.m.
Advisor: Dr. Caroline R. Szczepanski
Surfaces are how bulk materials interact with the world, and in nature, how organisms interact with their environment. As such, multiple approaches in water-security research take inspiration from the unique water-surface interactions that numerous plants and animals use to address scarcity. Traditional strategies used for surface formation (e.g. photolithography, block copolymer assembly, additive manufacturing, and machining) have certain limitations, including the need for multiple processing steps or specialized equipment, patterned length scale restrictions, as well as requirements for niche chemical precursors. These limitations have associated costs in terms of time, energy, and resources, and also often result in excess waste generation. Compared to these traditional methods, photopolymerization-induced phase separation (PIPS) offers many advantages, as it can be employed at ambient conditions and utilize commercially available chemicals, forming features at multiple length scales in a single UV cure step via reaction-driven topography with no photomasks and minimal waste generation. Here, the Namib Desert beetle is taken as a guide for designing surfaces with PIPS capable of water capture from humid environments. The chemical and physical patterning that arises from PIPS makes it an ideal approach for designing complex, hierarchically-structured surfaces reminiscent of the beetle carapace. To achieve this biomimetic design, surface wrinkling and phase separation behavior during PIPS are studied in conjunction with one another, combining mechanisms often studied in isolation.
Two families of resins were studied for biomimetic coatings via PIPS: (1) an acrylonitrile and 1,6-hexanediol diacrylate comonomer system with poly(methyl methacrylate) additives, and (2) a vinyl acetate and 1,6-hexanediol diacrylate comonomer system with poly(dimethyl siloxane) additives. The inert polymer additives were initially dissolved in the comonomer solutions where, upon photopolymerization, decreased miscibility between these inert additives and the developing polymer network triggered phase separation. Examining the effects of comonomer/polymer selection, crosslink density, UV intensity, and curing environment provides a robust exploration space for investigating the interplay of phase separation, network vitrification, and interfacial energies present in the system. Control over the reaction thermodynamics and kinetics through these experimental variables resulted in heterogeneous polymer morphologies with unique chemical and physical surface patterning. Coatings from the two PIPS resins resulted in surface texturing on both the microscale and macroscale on a single surface. Specifically, the inert polymer additive enables macroscale wrinkles to form via depth-independent internal stresses across phase domains, while microscale roughness simultaneously arises from depth-wise mechanical gradients due to oxygen radical quenching. Chemical patterning is achieved via macroscale phase separation. Domain formation and coalescence are induced by tailoring interfacial energy interactions of the system, forming macroscale regions with differing wettabilities. Introducing materials with contrasting surface energies to form resin-material interfaces during photopolymerization can spatially direct the chemical domains as the system reorients to minimize its surface energy. Using the acrylonitrile and 1,6-hexanediol diacrylate comonomer system with poly(methyl methacrylate) additives, sample faces were produced that had stark contrasts in water contact angle, with a difference of over 50 degrees observed between the hydrophilic and hydrophobic faces.
To better understand PIPS systems, a systematic approach using Hansen Solubility Parameters (HSP) enabled rapid screening of potential resin formulations. The evolving miscibility interactions between the resin components during photopolymerization (reacting monomer to inert polymer, reacting polymer to inert polymer, and reacting monomer to reacting polymer) were evaluated. Experimental data from the acrylonitrile system were used to benchmark predictions made with this approach for selecting the comonomer and inert polymer system. The screening afforded by HSP analysis allowed for the design of a vinyl acetate and 1,6-hexanediol diacrylate comonomer system with poly(dimethyl siloxane) additives, minimizing safety hazards while maintaining comparable versatility in chemical and physical patterning. These resins were used to form large-scale (100 cm²) coatings to test for water capture performance. Here, hydrophilic domains were formed through resin-water interfaces introduced at the start of photopolymerization, resulting in circular smooth domains amid roughened hydrophobic domains, a patterning similar to that of the Namib Desert beetle. Hydrophobic PIPS surfaces with wrinkles demonstrated higher volumes of water collection compared to plain glass controls, and surfaces with chemically and physically heterogeneous domains collected the most water. This work aims to showcase the versatility of single-step coating design through PIPS to produce complex chemically and physically patterned surfaces using materials that possess minimal hazards while still being commercially and economically viable.
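The HSP screening described above conventionally relies on the Hansen solubility-parameter distance between two components; the standard expression (general form, not specific to this work) is:

```latex
R_a^{2} = 4\,(\delta_{D,1}-\delta_{D,2})^{2}
        + (\delta_{P,1}-\delta_{P,2})^{2}
        + (\delta_{H,1}-\delta_{H,2})^{2}
```

where δ_D, δ_P, and δ_H are the dispersive, polar, and hydrogen-bonding solubility parameters of each component. In this framing, component pairs whose distance R_a grows as monomer converts to polymer are candidates for the decreased miscibility, and hence phase separation, exploited during cure.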
Department: Chemical Engineering and Materials Science
Name: Tanzilur Rahman
Date Time: Tuesday, October 8th, 2024 - 1:00 p.m.
Advisor: Dr. Carl Boehlert
The main hypothesis of this research work is that thermomechanically-processed regions of wrought metals and alloys that undergo similar equivalent plastic strains exhibit similar microstructures and associated characteristics, such as mechanical and physical properties (i.e., properties that are related to the dislocation structures). That is, materials that undergo the same equivalent plastic strain should exhibit the same dislocation structures and dislocation densities, and this should then translate to similar microstructures and associated mechanical properties. It was decided that less than 10% difference would be considered acceptable for verifying this hypothesis. In this work, high-pressure torsion (HPT) was considered as the plastic deformation processing technique to produce the wrought samples. To some extent, the proposed hypothesis has been evaluated indirectly (only for hardness distribution) by the HPT research community. However, the proposed work is novel because no one has directly evaluated this hypothesis using the combined microstructure and hardness methodology proposed in this work.
The equivalent strains, which for HPT processing are a function of the number of turns, the radial distance from the disk center, and the disk height (see the expression below), chosen as the foci of this work were 24, 82, 123, 247, and 371. The following microstructural characterization techniques were used to evaluate this hypothesis: optical microscopy (OM), scanning electron microscopy (SEM), X-ray diffraction (XRD), atom probe tomography (APT), and transmission electron microscopy (TEM). Vickers and Berkovich microhardness testing were chosen as the mechanical property characterization techniques to evaluate the hypothesis.
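For reference, the equivalent strain in HPT is usually computed with the standard von Mises expression (a textbook relation, assumed here rather than quoted from this work), where N is the number of turns, r the radial distance from the disk center, and h the disk height:

```latex
\varepsilon_{\mathrm{eq}} = \frac{2\pi N r}{\sqrt{3}\, h}
```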
The model material used was Zn-3Mg (wt.%), which readily undergoes plastic deformation at room temperature (RT) without a tendency for cracking at plastic strain levels lower than 30%. There were several reasons that this material was chosen. In equilibrium, this alloy exhibits a two-phase microstructure. This material is also not difficult to prepare metallographically and obtain OM and SEM images as well as Vickers indents, and the grain size range is usually between 1-100 microns. This material is susceptible to HPT plastic deformation at normal pressures (6 GPa) and can withstand a relatively large number of turns without cracking (i.e., >30).
In addition to evaluating this hypothesis, another objective of this dissertation work was to compare the microstructure and hardness of powder-processed Zn-3Mg (wt.%) HPT disks with similar disks processed from Zn-3Mg(wt.%) cast alloys as well as hybrids of the same composition. In particular, the different hardness distributions of these multiple-phase materials and their steady-state behavior are discussed.
The overall results could neither verify nor nullify the hypothesis and the reasons for this are described in detail. However, evaluation of the hypothesis helped further understanding of the microstructural evolution that takes place during HPT processing. In addition, the results of the microstructure and mechanical property evaluation will facilitate the development of the next generation of Zn-Mg implants with improved biodegradable and mechanical properties. Overall, this work has enhanced our understanding of the effect of HPT processing on the microstructure and resulting mechanical properties of Zn-3Mg (wt.%), and this understanding can be transferred to better understand such processing on other alloys and alloy systems.
Department: Chemical Engineering and Materials Science
Name: Affan Malik
Date Time: Thursday, September 19th, 2024 - 2:00 p.m.
Advisor: Dr. Hui-Chia Yu
Energy storage technologies are key to a future of less reliance on fossil fuels and cleaner energy. Rechargeable batteries, particularly lithium-ion batteries, have become a mainstay in energy storage, notably in electric vehicles and mobile applications. However, optimizing their performance to achieve faster charging, increased capacity, and higher utilization remains a challenge. Accomplishing these goals requires a microscopic-level understanding of battery electrodes, which is hindered by their complex morphologies. Computer simulations can bridge this gap by providing insights into microstructure phenomena. A framework combining the smoothed boundary method (SBM) and adaptive mesh refinement (AMR) is introduced to model and study electrode microstructures. This framework is implemented with finite difference methods (FDM) and parametrized with material properties from the literature. We demonstrate the framework's usage and effectiveness with half-cell simulations of a LixNi1/3Mn1/3Co1/3O2 (NMC-333) cathode through one-dimensional and three-dimensional simulations on synthetically generated microstructures. A crucial goal of our work is studying lithium plating on electrodes, which is a major obstacle to realizing an electrode's true theoretical capacity and fast charging. Graphite, the predominant anode material in lithium-ion batteries, is particularly prone to lithium plating, especially under fast-charging conditions. Thus, modeling graphite is critical to grasp the dynamics of lithium-ion batteries and lithium plating. The graphite anode undergoes phase transformations upon lithiation. Incorporating the Cahn-Hilliard phase-field equation into the framework allows for detailed and more accurate simulations of these phase transformations in graphite anodes. Using the developed framework for graphite, we identified overcharging conditions, the influence of particle size, and the importance of pore tortuosity on real reconstructed electrodes. The framework can facilitate the design of thick electrodes, promising higher capacity without experimental construction.
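For reference, a standard form of the Cahn-Hilliard phase-field equation mentioned above (the general textbook form; the SBM-modified version used in the dissertation may differ) is:

```latex
\frac{\partial c}{\partial t} = \nabla \cdot \bigl( M \,\nabla \mu \bigr),
\qquad
\mu = \frac{\partial f_{\mathrm{bulk}}}{\partial c} - \kappa \nabla^{2} c
```

where c is the local lithium site fraction, M the mobility, f_bulk the homogeneous free energy density, and κ the gradient energy coefficient.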
Furthermore, the framework allowed us to examine two different approaches to delaying lithium plating in graphite: a thermodynamic approach using hybrid anodes, in which graphite is mixed with hard carbon, and a kinetic approach using tunnels, in which synthetic channels are introduced into the electrode. Through our simulations, we identify that hard carbon particles act as a buffer for lithiation in hybrid anodes, delaying the surface saturation of graphite particles and thus delaying lithium plating on graphite. On the other hand, creating tunnels generates easier paths for ion diffusion and therefore leads to better utilization of the electrode. Such channels in thick electrodes can generate high-capacity and efficient electrodes. Finally, the development of this framework culminates with a demonstration of full-cell simulations. In summary, simulating electrochemical processes in complex electrode microstructures is streamlined by the presented framework, which offers a fast and robust tool for designing and studying microstructures.
Department: Chemical Engineering and Materials Science
Name: Mehrsa Mardikoraem
Date Time: Friday, August 2nd, 2024 - 2:00 p.m.
Advisor: Dr. Daniel Woldring
Proteins are vital in medicine, nanotechnology, and industry. Protein engineering designs these molecules for specific functions like catalyzing reactions or drug delivery. However, designing proteins with desired properties is challenging due to unpredictable mutation effects and complex fitness landscapes. Traditional methods like directed evolution and rational design have limitations in exploring vast sequence spaces and modeling amino acid interactions. Advances in machine learning (ML) and the increasing availability of biological data have shifted protein engineering from theory-driven to data-driven approaches. Despite progress, challenges remain in capturing nuanced protein behaviors, enhancing data quality and diversity, and developing models for complex protein-ligand interactions.
This dissertation integrates ML and computational tools with biological insights for innovative protein engineering. It focuses on designing proteins with desired properties, enhancing their numerical representations, and modeling protein-drug interactions. An ensemble approach combining traditional encodings with protein sequence language models achieved a 94% F1 score in sequence-function predictions. The study also developed a novel pipeline combining AlphaFold, molecular docking, and a heterogeneous graph neural network model (HIPO) for predicting the inhibition of drug transport proteins, which are crucial for drug metabolism and biodistribution and are implicated in adverse side effects from drug-drug interactions. Advancing beyond protein representation and drug interaction modeling, this work generates new-to-nature proteins with desired properties using generative models and ancestral sequence reconstruction.
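The ensemble idea of combining traditional encodings with language-model embeddings can be pictured as soft voting over representation-specific classifiers. The sketch below uses random placeholder data and simple scikit-learn models; the feature shapes, names, and model choices are assumptions, not the dissertation's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Hypothetical inputs: X_onehot is a traditional one-hot encoding of sequences,
# X_plm is a per-sequence embedding from a protein language model (mean-pooled),
# y is a binary functional label. All data here are random placeholders.
rng = np.random.default_rng(0)
n = 500
X_onehot = rng.integers(0, 2, size=(n, 200)).astype(float)
X_plm = rng.normal(size=(n, 128))
y = rng.integers(0, 2, size=n)

Xa_tr, Xa_te, Xb_tr, Xb_te, y_tr, y_te = train_test_split(
    X_onehot, X_plm, y, test_size=0.2, random_state=0)

# Train one model per representation, then average predicted probabilities (soft voting).
m1 = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xa_tr, y_tr)
m2 = LogisticRegression(max_iter=1000).fit(Xb_tr, y_tr)
p = 0.5 * (m1.predict_proba(Xa_te)[:, 1] + m2.predict_proba(Xb_te)[:, 1])
print("ensemble F1:", f1_score(y_te, (p > 0.5).astype(int)))
```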
By integrating biological insights with advanced ML techniques, this research enhances the capabilities of protein engineering for improved therapeutics and diagnostics.
Department: Chemical Engineering and Materials Science
Name: Christopher Herrera
Date Time: Monday, April 23rd, 2024 - 10:30 a.m.
Advisor: Dr. Richard Lunt
Interest in photovoltaics (PV) is steadily increasing with the development of building-integrated photovoltaics (BIPV). To accelerate BIPV integration, transparent PVs (TPV) have emerged to enable deployment over vision glass where visible transparency and power conversion efficiency (PCE) are equally important. Transparent luminescent solar concentrators (TLSCs) offer a promising approach to achieving high visible transparency due to a simpler module structure in the incident light path. By selectively harvesting ultraviolet (UV) and near-infrared (NIR) wavelengths, TPVs and TLSCs have a theoretical PCE limit of 20.6% for human vision. To date, TLSCs have only reported moderate PCE values with often poor or unreported operational lifetimes. This thesis focuses on modification of various luminophore classes (organic molecules, organic salts, and metal halide nanocluster salts) to provide routes to improve the performance and lifetime of TLSCs and demonstrate future applications in the agriculture sector.
Organic cyanine salts are popular luminophore candidates in TLSCs due to highly tunable, selective absorption bands with high demonstrated photoluminescent quantum yield (PLQY) in the visible region. However, they commonly suffer from poor photostability and low PLQY in the NIR region. Here, we demonstrate the surprising impact of anion exchange in dramatically enhancing the lifetime of cyanine salts in a dilute environment without significantly altering the bandgap or PLQY. This enhancement results in an extrapolated lifetime increase from tens of hours to over 65,000 hours under illumination. Using a combination of experiment and DFT computation, we demonstrate that lower absolute cation-anion binding energies generally lead to greater photostability. We then used this model to predict the stability of other anions.
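As an illustration of how an extrapolated lifetime can be obtained from accelerated photostability data, the sketch below fits a first-order photobleaching decay and extrapolates to a T80 threshold. The decay model, threshold, and data are assumptions for illustration only, not the measurement protocol or results of this work.

```python
import numpy as np

# Hypothetical accelerated-aging measurements of luminophore absorbance under illumination.
t_hours = np.array([0, 5, 10, 20, 40, 80])
absorbance = np.array([1.00, 0.999, 0.998, 0.996, 0.992, 0.984])

# Assume first-order photobleaching, ln(A/A0) = -k t, then define the lifetime as the
# time to reach 80% of the initial absorbance (T80) - both are illustrative choices.
k = -np.polyfit(t_hours, np.log(absorbance / absorbance[0]), 1)[0]
t80 = np.log(1 / 0.8) / k
print(f"decay constant k = {k:.2e} 1/h, extrapolated T80 = {t80:.0f} h")
```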
Next, a class of donor-acceptor-donor (DAD) molecules is investigated to begin understanding the relationship between chemical structure and PLQY. Within this DAD class, we demonstrate a strong correlation between solvent environment and DAD PLQY, resulting in dramatic enhancements in PLQY, with values close to 1.0. We fabricate LSCs using these DADs and report the highest single-component device performance to date.
Metal halide nanoclusters, which are precisely defined in their chemical structure, have recently been shown by our group to be a promising UV-absorbing luminophore. By changing the transition metal from Mo (group 6) to Ta or Nb (group 5), the bandgap and absorption bands shift dramatically, with distinct transitions present in the NIR, making them of even greater interest for TPVs and TLSCs. We explore the photophysical properties of these new compounds, contrasting them with the Mo-based clusters, and discuss pathways for TPV and TLSC integration.
Finally, we demonstrate the first plant-transparent PVs highly suitable for agricultural applications. This will initiate a new field of “transparent agrivoltaics” in which the tradeoff between plant yield and power production can effectively be eliminated. We first studied the effects of varying light intensity and wavelength-selective cutoffs on commercially important crops (basil, petunia, and tomato). Despite the differences in TPV harvester absorption spectra, photon transmission of photosynthetically active radiation (PAR; 400-700 nm) is the most dominant predictor of crop yield and quality, indicating that the blue, green, and red wavebands are all essentially equally important to these plants. When the average photosynthetic daily light integral exceeds ~12 mol·m-2·d-1, basil and petunia yield and quality are acceptable for commercial production. However, even modest decreases in TPV transmission of PAR reduce tomato growth and fruit yield. The results identify the necessity to maximize transmission of PAR to create the most broadly applicable TPV agrivoltaic panels for diverse crops and geographic locations. We determine that the deployment of 10% PCE, plant-optimized TPVs over approximately 10% of total agricultural and pastureland in the U.S. would generate 7 TW, nearly double the entire energy demand of the U.S.
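A back-of-envelope check of the ~7 TW figure is sketched below; every input value (land area, time-averaged insolation, coverage fraction) is an assumption chosen only to illustrate the order of magnitude, not the study's own accounting.

```python
# Back-of-envelope check of the ~7 TW estimate (all inputs are assumptions for illustration).
area_ag_pasture_m2 = 3.6e12      # ~3.6 million km^2 of U.S. agricultural + pastureland (assumed)
coverage = 0.10                  # 10% of that area covered by TPV
insolation_avg_W_m2 = 200.0      # time-averaged solar irradiance (assumed)
pce = 0.10                       # 10% power conversion efficiency

power_W = area_ag_pasture_m2 * coverage * insolation_avg_W_m2 * pce
print(f"average power ~ {power_W / 1e12:.1f} TW")   # ~7 TW, consistent with the stated figure
```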
Department: Chemical Engineering and Materials Science
Name: Chase Bruggerman
Date Time: Wednesday, April 10th, 2024 - 9:00 a.m.
Advisor: David Hickey
About 15% of enzymes rely on the cofactor nicotinamide adenine dinucleotide (phosphate) (NAD(P)+). The cofactor has a redox-active nicotinamide site, which can undergo a reversible two-electron-one-proton reduction to form NAD(P)H. The ability to control reactions involving NAD(P)H is a potential market opportunity, enabling the transformation of biological feedstocks with high safety (near room temperature) and selectivity (both regio- and stereoselectivity). However, the cost of NAD(P)+ – tens to hundreds of thousands of dollars per mole – is prohibitively high. An appealing way to lower the cost barrier is to regenerate a catalytic amount of NAD(P)H from electrochemical reduction of NAD(P)+; however, the reduction is often intercepted after the first electron transfer to give an enzymatically-inactive dimer. The ability to design systems for regenerable NADH is hindered by a lack of understanding of which structural features correlate with dimerization, and which features correlate with reduction to NAD(P)H. Cofactor mimetics (mNAD+), which retain the redox active nicotinamide site but have variable molecular structures, have been explored as a platform for understanding the structure-function relationships governing the redox behavior of these cofactors.
The purpose of the present thesis is to explore the electrochemistry of mNAD+, to understand which structural features correlate with dimerization, and how systems can be designed to favor reduction to mNADH over mNAD dimer. First, an overview will be presented of the chemistry and electrochemistry of NAD+ and mNAD+, with a special emphasis on methods of quantifying dimerization rates. The next part of the presentation explores the effect of both the molecular structure and the counterion of mNAD+ on the dimerization rate, using alternating current voltammetry. It is shown that dimerization is faster at lower reduction potentials and, counterintuitively, when sterics at the 1-position are larger; the data suggest the reduction of mNAD+X- ion pairs rather than lone mNAD+ ions. The second half of the talk will explore conditions that favor the reduction of mNAD+ to mNADH, and it is shown that sodium pyruvate favors the reduction of mNAD+ to a product that is electrochemically indistinguishable from mNADH. Evidence is provided in support of an interaction between an mNAD radical and a pyruvate radical, with mNAD increasing the rate of electron transfer to pyruvate. Finally, the impact of pyruvate on product distribution of mNAD+ is explored with bulk electrolysis experiments.
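As a toy illustration of the competition that determines the product distribution, the sketch below integrates a minimal kinetic model in which the singly reduced radical either dimerizes or is further reduced to the active form; the mechanism and rate constants are placeholder assumptions, not the scheme or data presented in this talk.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy kinetic competition after the first electron transfer (illustrative only):
# the radical R can dimerize (2R -> D, rate k_dim) or be further reduced to the
# enzymatically active form H (R -> H, pseudo-first-order rate k_red).
k_dim, k_red = 1.0e3, 5.0        # assumed rate constants (M^-1 s^-1 and s^-1)
R0 = 1e-3                        # initial radical concentration, M (assumed)

def rhs(t, y):
    R, D, H = y
    return [-2 * k_dim * R**2 - k_red * R, k_dim * R**2, k_red * R]

sol = solve_ivp(rhs, (0, 10), [R0, 0.0, 0.0])
R, D, H = sol.y[:, -1]
print(f"dimer yield ~ {2 * D / R0:.2f}, reduced-product yield ~ {H / R0:.2f}")
```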
Department: Chemical Engineering and Materials Science
Name: Lincoln Mtemeri
Date Time: Thursday, April 4th, 2024 - 1:00 p.m.
Advisor: Dr. David P. Hickey
Cell-free bioelectrocatalysis has drawn significant research attention as the world transitions towards sustainable bioenergy sources. This technology utilizes electrodes to drive challenging enzymatic redox reactions, such as CO2 reduction and selective oxidation of lignin biomass. At these bioelectrochemical interfaces, enzymes are rarely capable of direct exchange of electrons with the electrode surface because many redox enzymes harbor cofactors that are buried within protein matrices that act as electrical insulators. In such cases, electrochemically active small molecules, called redox mediators, have proven effective in enabling efficient electron transfer by acting as electron shuttles between the electrode and the enzyme cofactor. However, the task of selecting suitable redox mediators remains challenging due to the lack of comprehensive design criteria. Presently, their design relies on a trial-and-error approach that emphasizes redox potential as the only parameter while overlooking the significance of other structural features. It is crucial to acknowledge that while the redox potential of the mediator serves as a thermodynamic descriptor, it falls short of fully describing the kinetic behavior of redox mediators. In this seminar, I present our efforts in developing strategies for designing and understanding the behavior of redox species using quinone-mediated glucose oxidation by glucose oxidase as a model system.
This seminar will begin by describing the application of parameterized modeling – specifically, supervised machine learning – to identify which structural components of quinone redox mediators correlate to enhanced reactivity with a model enzyme, glucose oxidase (GOx). Through this analysis, we identified redox potential and mediator area (or molecular size) as crucial chemical parameters to optimize when designing mediators. We further explored the role of the steric parameter (i.e. redox mediator projected area) when accessing GOx via its active site tunnel. Using two complementary computational techniques, steered molecular dynamics and umbrella sampling, a rate-limiting step was identified from a series of elementary steps. Specifically, we determined that the transport of redox species in the protein tunnel constitutes the rate-limiting step in the overall process.
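The parameterized-modeling step can be pictured as fitting a supervised model on physicochemical descriptors of candidate mediators and inspecting which descriptors drive the prediction. The sketch below uses synthetic data and a random forest; the descriptor names, data, and model choice are illustrative assumptions, not the seminar's dataset or method.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hedged sketch of parameterized modeling for mediator design: fit a supervised model
# on quinone descriptors and rank feature importances. All data are synthetic placeholders.
rng = np.random.default_rng(1)
n = 60
X = pd.DataFrame({
    "redox_potential_V": rng.normal(0.0, 0.2, n),
    "projected_area_A2": rng.normal(60, 15, n),
    "logP": rng.normal(1.0, 0.8, n),
    "dipole_D": rng.normal(2.5, 1.0, n),
})
# Synthetic "rate constant" dominated by potential and size, mimicking the reported trend.
y = 2.0 * X["redox_potential_V"] - 0.03 * X["projected_area_A2"] + rng.normal(0, 0.1, n)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
for name, imp in sorted(zip(X.columns, model.feature_importances_), key=lambda p: -p[1]):
    print(f"{name:>20s}: {imp:.2f}")
```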
Utilizing molecular docking and molecular dynamics simulations, we examined a specific quinone-functionalized polymer with the goal of determining why it exhibits activity with glucose dehydrogenase (FAD-GDH) but not with GOx, despite both structurally similar enzymes exhibiting activity to the corresponding freely diffusing mediator. Docking simulations coupled with MD refinement reveal that the active site of GOx is inaccessible to the polymer-bound redox mediator due to the added steric bulk; this is in contrast to FAD-GDH which has a wider molecular tunnel to its active site.
Although these strategies for redox mediator design and engineering were developed using GOx as a model system, a similar approach holds promise for designing systems involving other redox mediators. This work demonstrates that employing parameterized modeling in designing mediators has the potential to be applied in other bioelectrocatalytic platforms. Moreover, the computational simulations can effectively address fundamental questions where continuum models are inadequate. This integrated effort brings us closer to the design of next-generation effective bioelectrodes for mediated bioelectrocatalysis.
Department: Chemical Engineering and Materials Science
Name: Thanh Tran
Date Time: Thursday, December 7th, 2023 - 1:00 p.m.
Advisor: Qi Hua Fan
In light of the escalating costs of Indium Tin Oxide (ITO), the quest for its sustainable alternatives becomes imperative. This dissertation delves into the utilization of a single-beam ion source in conjunction with magnetron sputtering to manipulate film microstructures, aiming to enhance and fabricate transparent conductive electrodes. Studies of the ion source were conducted to explore its potential applications.
Through the assistance of the ion source, an extensive range of modulation in the magnetron voltage was achieved, spanning approximately 240 V down to 130 V as the ion source's voltage was varied from 0 to 150 V. This mechanism led to a low-voltage, high-current magnetron discharge, facilitating a 'soft sputtering mode' conducive to thin-film growth. ITO thin films were successfully deposited at room temperature by employing the combined single-beam ion source and magnetron sputtering, resulting in polycrystalline ITO thin films characterized by significantly reduced resistivity and surface roughness.
Notably, the ion beam treatment played a pivotal role in the growth of a seed layer, approximately 1 nm in thickness, enhancing the subsequent silver film's wettability. This, in turn, led to the creation of a continuous silver film of approximately 6 nm with a resistivity of 11.4 µΩ·cm. This ultra-thin continuous silver film exhibited a transmittance spectrum comparable to simulation results and displayed greatly improved film adhesion on glass, as validated by the standard 100-grid tape test. High-resolution SEM images of the early growth stage show that the ion beam treatment leads to widespread coverage of the deposited silver, whereas films without the ion beam treatment tend to agglomerate into isolated round islands. The XRD patterns show that the (111) crystallization of silver films is suppressed with the soft ion beam treatment, whereas the growth of (200) planes is fortified. The results indicate that silver films grown on the (200) surface have less tendency to agglomerate than on the (111) surface.
However, the inherent instability of silver films posed a challenge. To address this, a cap layer of aluminum on silver was introduced to enhance the thermal and environmental stability of the deposited ultra-thin continuous silver films, measuring approximately 7 nm thick. The resulting film, composed of a 1 nm buffer layer of ion beam-treated silver, a sputter-deposited layer of pure silver, and a cap layer of aluminum with a nominal thickness of 0.2 nm, significantly bolstered the film's stability without a marked compromise of its optical and electrical properties. The improved environmental stability was attributed to a cathodic protection mechanism and reduced surface atom diffusivity, while the thermal stability was credited to the reduction of surface atom mobility in the presence of aluminum atoms. Further, thermal treatment of the duplex film led to an enhancement in its electrical conductivity and optical transmittance owing to an improvement in crystallinity. The annealed aluminum/silver duplex structure exhibited low electrical resistance and high optical transmittance, comparable to simulated results, positioning it among the top films reported.
The stabilized ultra-thin silver films were then leveraged to craft highly transparent and conductive electrodes on glass substrates in a sandwich structure with optimized layers of indium tin oxide (ITO). Notably, exceptional thermal stability was achieved, and annealing at 200°C in vacuum and air enhanced the film's optical and electrical performance. X-ray diffraction analysis validated the enhanced crystallization, manifested by the emergence of a silver (200) peak after air annealing. The resultant electrodes showcased outstanding transparency, conductivity, and thermal stability, positioning them favorably for architectural glass coatings and optoelectronic applications such as photovoltaics and displays.
Further computational work was conducted to study the optical performance of six different sandwich structures on glass, comprising typical transparent conductive oxides with an ultra-thin layer of silver at 6 nm and 7 nm in the middle. The study returned contour maps of average optical transmittance in the 300-1200 nm and 400-800 nm wavelength ranges as functions of the thicknesses of the top and bottom oxides in the 0-100 nm range with a step size of 5 nm. The simulation also provides the optimum designs and their corresponding transmittance spectra for each sandwich structure. Among the tested structures, Glass/TiO2/Ag/AZO exhibited the highest average transmittance of 90.8% in the 400-800 nm range, while Glass/TiO2/Ag/SnO2 demonstrated the highest average transmittance of 83.3% in the 300-1200 nm range. These structures, along with Glass/SnO2/Ag/SnO2, were found to have good optical performance and could theoretically replace ITO in solar-cell and display applications. This dissertation also shows other examples of optimizing the optical performance of the structures for specific applications.
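The thickness sweep described above can be illustrated with a minimal normal-incidence transfer-matrix calculation. The refractive indices below are treated as wavelength-independent placeholders and the illumination side is assumed, so the numbers are purely illustrative; a faithful calculation would use measured dispersive n,k data for each material.

```python
import numpy as np

def transmittance(n_layers, d_layers_nm, lam_nm, n_in=1.0, n_sub=1.52):
    """Normal-incidence transfer-matrix transmittance of a thin-film stack on glass
    (characteristic-matrix formulation; layers are listed from the illuminated side)."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers_nm):
        delta = 2 * np.pi * n * d / lam_nm
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    return 4 * n_in * n_sub / abs(n_in * B + C) ** 2

# Constant, assumed optical indices (real films are dispersive): AZO ~1.9, Ag ~0.05+3.5j, TiO2 ~2.4.
# Illumination is assumed from the AZO side of a Glass/TiO2/Ag/AZO stack with a 7 nm Ag layer.
n_AZO, n_Ag, n_TiO2 = 1.9, 0.05 + 3.5j, 2.4
wavelengths = np.arange(400, 801, 10)           # average over the 400-800 nm range

best = (0, 0, 0.0)
for t_top in range(0, 101, 5):                  # AZO thickness sweep, nm
    for t_bot in range(0, 101, 5):              # TiO2 thickness sweep, nm
        T_avg = np.mean([transmittance([n_AZO, n_Ag, n_TiO2], [t_top, 7.0, t_bot], lam)
                         for lam in wavelengths])
        if T_avg > best[2]:
            best = (t_top, t_bot, T_avg)
print("best AZO/TiO2 thicknesses (nm) and average T:", best)
```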
Furthermore, a case study was conducted to explore the use of tantalum-doped tin oxide (TTO) as a viable alternative to ITO. Employing a room temperature treatment facilitated by a single-beam ion source, highly transparent and conductive TTO films were produced. Specifically, DC sputtering of 100 nm TTO thin films, using a TTO target (Sn(1-x)TaxO2 with x = 0.02, 99.99% purity) combined with a soft ion beam generated by an ion source at 120 V, revealed that with ion beam assistance, the TTO thin film achieved a resistivity as low as 9.3 mΩ·cm and an average transmittance of 79% in the 400 nm to 1200 nm range. In contrast, without ion beam assistance, the minimum resistivity achieved was 15.9 mΩ·cm, accompanied by an average transmittance of 78% within the same wavelength range.
Department: Chemical Engineering and Materials Science
Name: Geeta Kumari
Date Time: Monday, October 2, 2023 - 9:00 a.m.
Advisor: Dr. Carl Boehlert
Alloy ATI 718Plus is a relatively new Ni-based superalloy developed to improve upon the properties of Inconel 718 (IN718). It offers a service temperature of up to 704 °C (55 °C higher than IN718) and formability similar to IN718 and better than that of Waspaloy because of its chemical composition, microstructure, and major strengthening phase, γ'. The microstructure plays a vital role in determining the mechanism of particle-dislocation interaction during deformation, as the mechanism changes with particle size. The active mechanism for a unimodal distribution with an average γ' particle size is well studied in the literature, but consideration of smaller and larger precipitates together, called a bimodal distribution, is still lacking. This study aims to understand the development of a bimodal γ' precipitate size distribution in ATI 718Plus and its stability under various thermal and tensile stress conditions.
In this study, initial optimization of the solutionizing temperature was conducted for subsequent aging treatment. The as-processed sample underwent heat treatment at 1000 °C for 1 hr, followed by water quenching (WQ). The aging process encompassed single-step and two-step methods with varied parameters, including time, temperature, and cooling rate. For the two-step treatment, the sample was heated to 900 °C for 2 hr, then quenched to room temperature before being heated to 720 °C for 10 hr and quenched again. The resultant microstructure displayed a uniform bimodal distribution of γ' precipitates, with sizes of 11 nm and 55 nm for the smaller and larger precipitates, respectively. The developed microstructures underwent tensile testing to failure to assess their yield strength (YS), ultimate tensile strength (UTS), and elongation-to-failure (εf). Some of the tensile samples, intentionally unloaded after achieving 2-4% engineering strain, were evaluated using transmission electron microscopy to investigate the γ' precipitate-dislocation interactions. In the case of unimodal samples, weak-pair shearing was observed to be the dominant mechanism for smaller γ' precipitates (~14 nm), while both strong-pair shearing and dislocation loops were observed for the microstructures containing larger γ' precipitates (~48 nm). The microstructure containing a bimodal distribution of γ' precipitates exhibited shearing as the dominant mechanism and also resulted in the largest strength values. The combined influence of temperature and elastic tensile stress on γ' precipitate stability was examined. Under simultaneous application of temperature and stress (creep), γ' precipitate growth accelerated in contrast to samples exposed only to temperature. The amount of growth varied with grain orientation in the creep-deformed sample.
Department: Chemical Engineering and Materials Science
Name: Shrirang Sabde
Date Time: Monday, September 18, 2023 - 8:00 a.m.
Advisors: Dr. Ramani Narayan and Dr. Ganapati D. Yadav
Plastic waste on land and in the oceans has become a major societal issue. Articles in print, television, and social media about plastic waste issues and bans on plastic items are on the rise everywhere in the world. Most serious are the issues of plastic persistence and microplastic contamination of the environment. Against this backdrop, my thesis presents work on recycling polyethylene terephthalate (PET) and Nylon 6 polymers to their individual monomer constituents by melt depolymerization using a phase transfer catalyst. PET and Nylons are industrial polymers used in the manufacture of bottles, carpets, textiles, fabrics, and many other products.
Melt depolymerization of polyethylene terephthalate (PET) and Nylon 6 waste was studied using a 2-L high-pressure autoclave reactor under autogenous pressure in excess water for various time intervals. Polyethylene glycol (PEG 400) was used as a novel phase transfer catalyst for the depolymerization. An engineering model based on solid (polymer)-liquid (melt)-liquid (water) phase transfer catalysis (PTC) for hydrolytic depolymerization was developed and validated. The PEG phase transfer catalyst was found to be more efficient and effective than standard metal-catalyzed depolymerization.
The PEG 400 phase transfer catalyst system was applied to the hydrolytic depolymerization of Nylon 6 (polyamide). Caprolactam monomer was obtained in 90-95% yield in 60 min using a temperature range of 200-250 °C. The PTC-catalyzed Nylon 6 hydrolysis model was developed and validated. The second part of the thesis work involved synthesis of biobased and biodegradable-compostable polyesters for packaging associated with food and paper-based products. Current carbon-carbon backbone polymers used in these applications cannot be recovered from the food/organic waste stream for recycling. They are not biodegradable and become persistent contaminants during composting of the organic/food waste stream. Therefore, there is a need for new polymer packaging that preserves and protects the integrity of the product but at the end-of-life can be readily composted along with the food/paper/organic wastes. High molecular weight (60-80 kg/mol) polybutylene adipate co-terephthalate (PBAT), polybutylene sebacate co-terephthalate (PBSeT), and polybutylene azelate co-terephthalate (PBAzT) were synthesized. The polymers obtained were characterized by intrinsic viscosity, acid number, and molecular weight. The extent of reaction was determined by monitoring acid groups in the reaction mixture. Reaction kinetics were also studied for the transesterification and esterification steps.
The compostability of such polyester products along with food and paper waste was studied in a commercial-scale compost bioreactor. Process parameters to operate the compost bioreactor were established. The food waste, paper, and compostable products were converted to a stable, brown organic product with a 70% volume reduction in 8 days. The results demonstrate that compostable plastic packaging products, in concert with food/organic waste, can be responsibly managed by integrating with small-scale compost bioreactors.
Department: Chemical Engineering and Materials Science
Name: Tyler Nathaniel Johnson
Date Time: Thursday, July 27, 2023 - 1:30 p.m.
Advisor: Dr. Andre Lee
Surface engineering has gained significant importance in the search for composites with enhanced properties. Among the materials of interest, graphene nanoplatelets stand out due to their unique characteristics, such as exceptional mechanical, electrical, and thermal properties. However, incorporating graphene-like materials into composites poses challenges due to the chemically inert nature of the basal plane. Nano-scale surface engineering techniques are necessary to render graphene-like materials suitable for optimized composites. In addition to surface engineering at the nano-scale, this document explores surface engineering at the macro-scale for the development of innovative manufacturing methods for pouch cell battery packaging materials in next-generation electric vehicles.
Plasma processing offers a promising approach to modify the basal plane of graphene, bridging the gap between chemical and physical methods. This document investigates the effects of C4F8 and O2 plasma source gases on graphene nanoplatelets. Precise control allows low-temperature plasma treatment to modify the graphene nanoplatelet surface without altering its intrinsic structure. This provides new opportunities for surface engineering in advanced composites. Plasma treatment enables tailored immersion characteristics and the introduction of functional groups, creating desired bonding environments. Plasma treatment is a powerful and efficient method for developing graphene-based composite materials.
Currently, battery thermal management systems in pouch cell systems rely on the use of cold plates, which have limitations such as increased weight and reduced energy density. To overcome these challenges, the integration of cold plate designs into existing pouch cell materials is proposed, and a novel manufacturing method is developed.
The novel manufacturing process is based on roll-molding, which allows for easy adoption at the manufacturing level by leveraging existing roll-to-roll lamination processes. Proof-of-concept experiments are conducted using laboratory-scale equipment to demonstrate the feasibility of the proposed approach. Furthermore, the document presents insights into scaling up the manufacturing process and identifies semi-optimized rolling conditions for the production of state-of-the-art cold plate designs.
This research aims to enhance the thermal management capabilities of pouch cell battery packaging materials, improving electric vehicle battery system performance and efficiency. Lamination procedures were compared to benchmark processes, with testing conducted on the laminated samples to evaluate mechanical integrity and oxygen permeability. Additionally, advanced materials were developed as superior alternatives to current 3-layer laminates, offering enhanced properties and manufacturability. These materials have the potential to enhance battery pack performance and functionality beyond existing limitations. The findings presented in this document provide valuable insights for advancing battery packing technologies, paving the way for more efficient, reliable, and high-performance battery systems in various applications.
Department: Chemical Engineering and Materials Science
Name: Shaylynn Crum-Dacon
Date Time: Thursday, July 27, 2023 - 11:00 a.m.
Advisor: Dr. Robert C. Ferrier Jr.
Epoxides, polyether precursors, are favorable materials for many applications. They have a ring strain that promotes polymerization, offer diverse functionalities, and are relatively easy to synthesize through sustainable means. Although epoxide polymerization can be traced back a few decades, it wasn't until 2017 that published work by Ferrier reported using mono(μ-alkoxo)bis(alkylaluminum) (MOB) to quickly and easily polymerize different epoxides. This polymerization platform will be used to explore polyether-based single-ion conductor electrolytes in the first work. Polymer electrolytes are widely regarded as the future of lithium batteries. By replacing the anode material with solid lithium (making a lithium metal battery) and the organic solvent media with a polymer, the favorable properties of lithium metal can be leveraged to increase battery efficiency. This project will utilize poly(epichlorohydrin) (PECH) and poly(propylene oxide) (PPO) to synthesize a single-ion conducting electrolyte. This work presents the synthesis of the single-ion conductor (SIC) using bis(trifluoromethanesulfonamide) as the single-ion conducting moiety. The incorporation of PPO combats the crystallinity of PECH, as shown by Tg analysis and ionic conductivity. The knowledge gained from this research will be valuable in advancing solid-state electrolytes for lithium metal battery applications.
Polymer composites are currently the most popular way to advance electrolyte matrices within lithium batteries, as they can eliminate the adverse properties of polymers (i.e., crystallinity and low ionic conductivity) by employing filler materials (solvents, ceramics, carbon powders, etc.) within the matrix to improve properties for desired applications. Building on the previous project, this work focuses on polyether-grafted nanoparticles (NPs) incorporated into a polyether matrix to study ether composites for lithium metal battery (LMB) applications. An initiator was grafted onto the surface of the NPs, and ECH was polymerized from that site. We alleviate compatibility concerns by using a low-molecular-weight ether matrix with a high-molecular-weight ether filler, and we explore possible electrolyte applications and combinations with the ether SIC.
The final project is intended to further epoxide use and application by expanding possible polymerization methods inspired by MOB synthesis. This work utilizes primary and secondary amine compounds to synthesize polymerization platforms. This introductory study revealed a simple synthesis of different amine initiators that could be used in tandem with the N-Al adduct to polymerize epoxides with different functionalities. These platforms will be utilized to explore different pathways for synthesizing polymer and composite electrolytes for lithium battery applications.
Department: Chemical Engineering and Materials Science
Name: Genzhi Hu
Date Time: Tuesday, July 11, 2023 - 10:00 a.m.
Advisor: Dr. Jason D. Nicholas
Reliable dissimilar material bonding is crucial in various fields, and the silver-nickel brazing technique has emerged as a promising method for joining ceramics to stainless steel. This technique offers improved mechanical bonding strengths and enhanced longevity compared to the commonly used Ag-CuO reactive air brazes. Additionally, this Particle Interlayer Directed Wetting and Spreading (PIDWAS) technique can also be used to prepare silver circuits on a variety of substrates that cannot normally be wet by molten silver. However, there is a lack of understanding regarding the mechanical and electrical behavior of circuits or current collectors produced using this technique. Furthermore, its applicability to aluminum-containing stainless steel and the feasibility of using alternative interlayer materials remain uncertain.
To address these gaps, this dissertation focuses on investigating the mechanical and electrical performance of Ag-Ni circuits created through the PIDWAS technique. The bonding strength between alumina substrates is examined and compared to commercially available silver pastes such as Heraeus C8710 and DAD-87. The sheet resistivity on alumina and contact resistivity on lanthanum strontium manganite are evaluated to assess the electrical properties of Ag-Ni current collectors. The findings demonstrate that PIDWAS-produced Ag-Ni layers exhibit better overall performance than conventional Ag contact pastes for circuit and current collector applications.
Furthermore, this research explores the feasibility of utilizing the Ag-Ni PIDWAS brazing technique for aluminum-containing stainless steel and investigates the mechanical, electrical, and durability aspects of the resulting braze joints. The braze joints are comprehensively evaluated under various conditions, including as-produced, air annealed, reduction-oxidation (redox) cycled, and rapid thermal cycled states. The results indicate that Ag-Ni brazes effectively getter and stabilize unwanted aluminum from the substrate, highlighting their potential for applications involving aluminum-containing stainless steel.
Additionally, a novel PIDWAS brazing technique using Ag-Pt is introduced in this work. The mechanical and electrical performance, as well as the microstructure changes of Ag-Pt brazes, are evaluated in as-produced, air annealed, redox cycled, and rapid thermal cycled conditions. The results demonstrate that Ag-Pt brazes outperform Ag-Ni brazes in oxidizing environments. The potential application of Ag-Pt brazes in other systems is also discussed. In summary, this work demonstrates that 1) different PIDWAS interlayer materials can be used to promote the wetting and spreading of molten silver, and 2) these interlayers can also be used to chemically getter undesirable surface-segregating substrate components.
Department: Chemical Engineering and Materials Science
Name: Gouree Kumbhar
Date Time: Friday, May 26, 2023 - 1:00 p.m.
Advisor: Dr. Robert C. Ferrier, Jr.
Epoxides are a promising polymer materials platform because of their diverse functionality, ease of synthesis, availability, and ring strain favoring polymerization. The recently reported mono(μ-alkoxo)bis(alkylaluminum) (MOB)-based polymerization technique provides controlled-molecular-weight polymers for a wide variety of functional epoxides without chain transfer. We want to use this facile polymerization platform to create polymers with orthogonally addressable pendant groups to precisely tune polymer properties. Specifically, this work focuses on the incorporation of charged moieties through post-polymerization modification of functional pendant groups to investigate their transport and self-assembly properties.
We have demonstrated control over molecular weight, composition, and architecture via copolymerization of propargyl glycidyl ether (PGE) and epichlorohydrin (ECH), which bear functional alkyne and chloromethyl groups, respectively. Molecular weights up to 100 kg/mol with narrow distributions were achieved. Copolymer composition was varied by incorporating increasing ratios of PGE (20-80%) in the polymerization feed. An in situ 1H NMR kinetic study was performed using two different systems, MOB and a separate initiator-catalyst, to determine reactivity ratios. Using the Meyer-Lowry method, reactivity ratios were calculated as rPGE = 0.69 and rECH = 1.43 for the MOB system, and rPGE = 0.72 and rECH = 1.48 for the separate initiator-catalyst system. In both cases rPGE × rECH ≈ 1, which confirms the statistical nature of the copolymer, with preferred addition of ECH to the growing chain end regardless of polymerization technique.
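A quick arithmetic check of the reported reactivity-ratio products, confirming that both fall close to unity, is sketched below.

```python
# Verify that the reported reactivity-ratio products are near unity, consistent with
# a statistical (near-ideal) copolymerization for both polymerization systems.
for label, r_pge, r_ech in [("MOB", 0.69, 1.43), ("separate initiator-catalyst", 0.72, 1.48)]:
    print(f"{label}: r_PGE * r_ECH = {r_pge * r_ech:.2f}")   # 0.99 and 1.07
```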
These precursor copolymers were further modified with various charged groups, such as imidazole and sulfonate, via orthogonal chemistry through the chloromethyl and alkyne moieties. This will be beneficial in achieving tuned compositional control of structure–property relationships in a polyether materials platform. These functional polyethers were then used to create economical crosslinked networks to prepare amphoteric ion exchange membranes (AIEMs). Nafion ion exchange membranes have been used in vanadium redox flow battery (VRFB) applications owing to their good ionic conductivity and excellent chemical and mechanical stability. However, Nafion's high cost, excessive swelling, and low ion selectivity limit its commercialization. AIEMs have the potential to prevent vanadium ion penetration, thus increasing ion selectivity. Membranes were synthesized by grafting the novel ECH- and PGE-based charged copolymer S-P(PGE-stat-ECH) onto a PVDF-co-HFP membrane matrix. We studied the physicochemical, electrochemical, and surface properties of these membranes to investigate the candidacy of this novel membrane for VRFB applications.
Next, we used a homopolymer of allyl glycidyl ether (PAGE) as a unifying platform for polyelectrolyte design. With the use of click chemistry, we created polyether-based polyanions and polycations to study the effect of charge and molecular weight on self-assembly. We also studied the effect of NaCl and LiCl salts on polyelectrolyte self-assembly at varying polyanion-to-polycation ratios. Coacervate formation was studied using absorbance measurements on a UV-vis spectrophotometer. With the use of the MOB polymerization platform, we can synthesize a variety of polymers, which will be useful in exploring the effects of counter-ions, polymer architecture, and charge density in the future. Our synthetic platform provides control over different governing parameters separately, which will be impactful in giving insights into polyelectrolyte self-assembly from a fundamental standpoint. We expect the broader impacts of this research to encompass innovation in polyelectrolyte design and application.
In conclusion, we demonstrated control over factors such as molecular weight, polymer architecture, charge density, monomer sequence, and counter-ions independently with the use of this platform. We have utilized these materials to further develop AIEMs for electrochemical application and to study charged polymer self-assembly.
Department: Chemical Engineering and Materials Science
Name: Jiawei Lu
Date Time: Friday, May 5, 2023 - 10:00 a.m.
Advisor: Dr. Thomas R. Bieler
The Ti-6Al-4V (Ti64) alloy has been widely used as a light-weight structural material due to its excellent corrosion resistance and high strength even at elevated temperatures. However, the poor machinability of Ti64, leading to higher costs, has severely limited its application. The formation of segmented chips rather than smooth continuous chips, caused by the intrinsic low thermal conductivity of Ti64, is of great interest and significance for investigation.
Ti64 bars with various microstructures, namely mill-annealed (MIL), elongated (ELO), solution treated and aged (STA), and lamellar (LAM), were machined at 61, 91, and 122 m/min. The chips were collected, and their microstructures were characterized by scanning electron microscopy (SEM) and electron backscattered diffraction (EBSD). The morphology of these chips was also measured, and observations of the smooth and segmented sides were also made and compared.
For STA chips, nano-indentation and EBSD were used to investigate local shear strain phenomena. An existing continuum model based upon material constants and mechanical properties was used for shear band width prediction at various cutting speeds and the predicted values were compared with the measured values and discussed. In addition, a model based on the morphology of the segmented chips was adopted to calculate the homogeneous shear strains in the segments and catastrophic shear strains within the shear bands. Representative examples of chips were characterized by EBSD and analyzed using the stress tensor obtained from finite element numerical simulation. Finally, the chips were annealed at 500, 600 and 650 ℃ to investigate their response to annealing, revealing effects of the chip deformation history. For LAM, a few EBSD scans were also carried out to show the correlation between chip morphology and local orientations.
Overall, the work presented in this study demonstrates an approach to investigating the formation of segmented chips and the severe deformation during turning. It can be further applied to the chips obtained from other machining methods and to identify effects of higher cutting speeds.
Department: Chemical Engineering and Materials Science
Name: Yan Xie
Date Time: Wednesday, May 3, 2023 - 1:00 p.m.
Advisor: Dr. Scott Calabrese Barton
Cascade reactions have attracted great attention in the fields of chemical synthesis, biofuel cells, and biosensors due to multiple benefits, including reduced waste generation and minimal purification requirements. They involve a sequence of chemical transformations that take place within a single reactor. In such reactions, the product of an individual reaction step, described as an intermediate, becomes the reactant of the following reaction step. The transport of these intermediates between neighboring active sites often faces the challenge of desorption into the bulk solvent, as well as competition from side reactions. The efficiency of cascade reactions is therefore often limited by intermediate transport.
Nature has evolved several substrate channeling strategies to enable the direct transfer of intermediates between adjacent active sites, such as molecular tunneling, chemical swing arms, spatial organization, and electrostatic channeling. All of these mechanisms guide the transport of intermediates in sequential cascade reaction steps from the generation site to the consumption site. In this work, molecular dynamics simulations were performed to computationally understand the mechanisms of electrostatic channeling and molecular tunneling. First, we studied the electrostatic channeling of glucose-6-phosphate (G6P) on a poly-arginine peptide connecting the sequential enzymes hexokinase (HK) and glucose-6-phosphate dehydrogenase (G6PDH). The incorporation of a positive peptide bridge guides the direct transfer of the negatively charged G6P from HK to G6PDH via electrostatic interaction and prevents wasteful desorption. Metadynamics is used in conjunction with molecular dynamics simulation to quantify the hopping rate of G6P on the bridge. According to lag time calculations from a kinetic Monte Carlo model, a poly-arginine bridge is more efficient at channeling G6P than the previously studied poly-lysine bridge.
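As a toy illustration of how lag times and channeling probabilities can be extracted from a kinetic Monte Carlo model of hopping along a bridge, consider the sketch below; the number of sites and all rate constants are assumptions, not values from the simulations described here.

```python
import numpy as np

# Toy kinetic Monte Carlo of an intermediate hopping along a cationic peptide bridge.
# Site count and rates are placeholders, not the simulated free-energy surface.
rng = np.random.default_rng(2)
n_sites = 10
k_fwd, k_back, k_desorb = 1.0e6, 2.0e5, 1.0e4     # s^-1, assumed

def one_transit():
    """Return (lag time, channeled?) for a single walker starting at site 0."""
    site, t = 0, 0.0
    while 0 <= site < n_sites:
        rates = np.array([k_fwd, k_back if site > 0 else 0.0, k_desorb])
        total = rates.sum()
        t += rng.exponential(1.0 / total)          # KMC time increment
        move = rng.choice(3, p=rates / total)      # pick an event
        if move == 0:
            site += 1
        elif move == 1:
            site -= 1
        else:
            return t, False                        # desorbed into the bulk
    return t, True                                 # reached the acceptor enzyme

results = [one_transit() for _ in range(5000)]
times = [t for t, ok in results if ok]
print(f"channeling probability ~ {np.mean([ok for _, ok in results]):.2f}, "
      f"mean lag time ~ {np.mean(times):.2e} s")
```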
A more complex model of electrostatic channeling was then considered, namely the malate dehydrogenase–citrate synthase complex of the citric acid cycle. The negatively charged intermediate oxalacetate (OAA) travels along a positive surface on the enzyme complex. A Markov state model (MSM) identified the dominant pathway and four bottleneck residues. The residues formed the highest energy area, trapping the movement of OAA. By conducting a hub score analysis and measuring channeling probabilities, we verified that replacing the experimentally determined positive key residue Arg65 with the neutral residue Ala65 led to a 50% reduction in channeling probability, as observed experimentally. This occurred because the mutation caused a disruption of the continuous positive surface pathway of OAA.
The mechanism of molecular tunneling was studied for an ammonia tunnel in the asparagine synthetase system. Combining molecular dynamics with umbrella sampling, energy profiles were constructed for both the original structure and the mutant structure with an alanine → leucine replacement. The largest energy barrier was identified at the narrowest area of the tunnel formed by several bottleneck residues. Due to its larger side chain, leucine caused a narrowing of the tunnel when it replaced alanine in the mutant structure, resulting in the blockage of NH3, and thus an increase in the local energy profile. We also identified the possible desorption paths of NH3, which would allow the escape of NH3 after the mutation. The increased desorption probability along these paths is consistent with decreased enzyme activity as observed in experiments, due to inefficient NH3 transfer after mutation.
Finally, the enzymatic interaction between hexokinase (HK) and glucose-6-phosphate dehydrogenase (G6PDH) was studied with coarse-grained molecular dynamics (CG MD). CG MD simplified the complex HK-bridge-G6PDH system by grouping several neighboring atoms into a coarse-grained bead, enabling simulation timescales of up to microseconds. These long timescales allowed the observation of enzymatic configuration changes. The relative rotation of G6PDH shows an electrostatic interaction between the enzymes, which is dependent on ionic strength.
Overall, this work computationally examines the mechanisms of substrate channeling at an atomic level and acts as a guide to design efficient artificial cascades with substrate channeling.
Department: Chemical Engineering and Materials Science
Name: Aditya Patil
Date Time: Friday, April 14, 2023 - 3:00 p.m.
Advisor: Dr. Andre Lee
Functionalizing the incompletely condensed octaphenyl double-decker silsesquioxane tetrasilanol, Ph8-DDSQ(OH)4, with reactive dichlorosilanes forms condensed, hybrid molecules with reactive organic groups on the opposite edge of the inorganic SiO1.5 core, surrounded by phenyl moieties. This unique phenyl-surrounded SQ core provides additional thermo-oxidative stability for high-temperature organic thermoplastics and thermosetting polymers. Unlike corner-capped functional POSS, condensing DDSQ with dichlorosilanes enables different chemical moieties on opposite sides of the SQ core, allowing SQ to act as the "bridging" chemical needed for bonding two different classes of materials. Another benefit unique to condensing the DDSQ-tetrol is the formation of isomers: Ph8-DDSQ(OH)4, when fully condensed with R1R2SiCl2, forms conformational isomers or regioisomers (isomers about the SQ core). The conformational isomer mixture often exhibits a lower liquidus temperature than the pure isomers, which is beneficial when mixing with organic resins at lower temperatures. Structural isomerism is the most radical type, wherein two compounds have the same number of atoms but entirely different chemical and physical properties because their bonding arrangements are distinct. This work examines an asymmetrically capped DDSQ system synthesized as a coupling agent between graphene oxide (GO) nanofiller and styrene vinyl ester (VE) resin. The DDSQ-modified GO is dispersed into VE resin with only simple mechanical stirring at room temperature. This work also studies the different isomers of the DDSQ system. First, meta and para isomeric moieties of phenyl ethynyl phenyl (PEP) dichlorosilane were obtained via the Sonogashira reaction and subsequent reaction with trichlorosilane. The synthesized dichlorosilanes were reacted with DDPh8T8(OH)4, and the relevant reaction conditions and yields are presented. The reaction led to the formation of cis and trans isomers, which form a eutectic mixture with a sharp melting point upon varying the ratio of cis and trans products. Upon using a mixture of meta and para dichlorosilanes as capping reagents, the reaction yielded a six-isomer mixture of compounds. This isomeric mixture did not exhibit the sharp melting characteristics of the individual isolated compounds. The sharpness of the solid-liquid transition can also be dampened when long-chain chlorosilanes are used as capping agents for the tetrol.
This work also investigated the effect of constitutional isomerism in cage-like silsesquioxanes. Specifically, edge-open octaphenyl silsesquioxane diol condensed with tetramethyl dichlorosiloxane and double-decker-shaped silsesquioxane tetraol condensed with dimethyl dichlorosilanes form structural isomers. The interactions between the organic group bonded to the D-Si and the adjacent phenyl group connected to the T-Si of the DDSQ molecules have a defining effect on the internal configuration of the DDSQ cage. This change affects the phase transformation between liquid and solid states, forming a glassy state in a pure isolated compound.
Department: Chemical Engineering and Materials Science
Name: Jin Dai
Date Time: Wednesday, March 22, 2023 - 1:00 p.m.
Advisor: Dr. Wei Lai
Lithium-ion batteries, based on the pioneering work of three Nobel Laureates, are everywhere in our lives, from portable electronics and electric vehicles to grid storage. However, they currently employ liquid electrolytes containing flammable organic solvents that could lead to a fire if batteries are overheated. Solid electrolytes, also called fast-ion conductors or superionic conductors, are alternatives with the utmost safety. Among various solid electrolytes, lithium garnet oxides are a promising family of materials due to their high ionic conductivity and electrochemical stability. This work discusses the study of diffusion and conduction in LixLa3Zrx-5Ta7-xO12 garnet oxides using computational methods. We developed two new generations of interatomic potentials, induced-dipole and machine-learning, for this composition series. We compared them with existing interatomic potentials in terms of force/virial error against density-functional theory, prediction of phase transitions, self-diffusivity, and ionic conductivity, and found that machine-learning interatomic potentials have the best accuracy. We then applied machine-learning interatomic potentials to investigate the temperature and composition dependence of diffusion and conduction in bulk materials and the influence of grain boundary structure on ionic conductivity. We believe that the atomic insight obtained from this work could be worthwhile in understanding the bottlenecks of materials performance and could provide guidance for further improvements.
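The link from MD trajectories to the reported transport quantities can be sketched with the Einstein relation for self-diffusivity and the Nernst-Einstein estimate of ionic conductivity (which neglects ion-ion correlations). The trajectory, temperature, and carrier density below are synthetic placeholders, not results from this work.

```python
import numpy as np

# Sketch: Li self-diffusivity from mean-squared displacement (Einstein relation),
# then a Nernst-Einstein conductivity estimate. The trajectory here is synthetic.
kB, e = 1.380649e-23, 1.602176634e-19
T = 600.0                      # K (assumed)
n_Li = 5.0e28                  # Li number density in m^-3 (assumed order of magnitude)

rng = np.random.default_rng(3)
dt_ps = 1.0
steps, n_atoms = 2000, 64
traj = np.cumsum(rng.normal(scale=0.05, size=(steps, n_atoms, 3)), axis=0)  # Angstrom

lags = np.arange(1, 500)
msd = np.array([np.mean(np.sum((traj[l:] - traj[:-l])**2, axis=-1)) for l in lags])

# MSD (A^2) = 6 D t  ->  convert A^2/ps to m^2/s via 1e-20 / 1e-12
slope = np.polyfit(lags * dt_ps, msd, 1)[0]
D = slope / 6.0 * 1e-20 / 1e-12
sigma = n_Li * e**2 * D / (kB * T)      # Nernst-Einstein (ignores the Haven ratio)
print(f"D ~ {D:.2e} m^2/s, sigma_NE ~ {sigma:.2e} S/m")
```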
Department: Civil and Environmental Engineering
Name: Francis Hanna
Date Time: Tuesday, November 5th, 2024 - 3:00 p.m.
Advisor: Dr. Annick Anctil
As the clean energy transition unfolds, the use of renewable energy and electric vehicles (EVs) has increased rapidly over the past decade and is expected to grow further. Solar and battery demands are expected to reach 29 PWh and 13 PWh by 2050, respectively. The clean energy transition is vital to meeting climate goals but faces challenges such as future battery waste generation and the availability and environmental footprint of energy materials.
Cadmium telluride (CdTe) is one of the world's leading thin-film photovoltaic (PV) technologies. CdTe PV relies on tellurium, a scarce metal mainly recovered as a by-product from copper electrorefining anode slimes. Several studies have investigated the availability of tellurium and used life cycle assessment (LCA) to evaluate its environmental impact. However, previous availability studies are static and do not reflect the interconnection of tellurium supply, demand, and price. Previous LCA studies do not reflect the industrial best practices for tellurium recovery. This study develops a system dynamics model to assess tellurium availability between 2023 and 2050 under different demand scenarios. All demand scenarios exhibit a tellurium supply gap. The results show that recycling retired solar panels and improving tellurium yield from copper electrorefining are efficient mitigation approaches. An LCA is also conducted to evaluate the environmental impact of tellurium recovery from copper electrorefining based on different production methods and locations. The environmental impact of tellurium varies by production location and method. Tellurium recovery in the USA via pyro-hydrometallurgical treatment of anode slimes reduces the freshwater toxicity and resource depletion of CdTe semiconductors by 44% and 42%, respectively, compared to the worst-case scenario. The results show that previous studies underestimate the environmental impact of tellurium and, as a result, underestimate the freshwater toxicity and abiotic depletion potential of CdTe solar panels by 35% and 50%, respectively.
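A minimal stock-and-flow sketch of the kind of supply-demand accounting a system dynamics model performs is shown below; every number (recoverable tellurium, yields, demand growth, panel lifetime) is a placeholder assumption, and the price and yield feedbacks of the actual model are omitted.

```python
import numpy as np

# Toy annual stock-and-flow model of tellurium supply vs. demand, 2023-2050.
# All parameter values are illustrative placeholders, not the dissertation's inputs.
years = np.arange(2023, 2051)
cu_refining_te_t = 600.0        # Te recoverable each year from Cu anode slimes, tonnes (assumed)
recovery_yield = 0.45           # fraction actually recovered (assumed mitigation lever)
demand_t = 800.0 * 1.06 ** (years - 2023)   # demand scenario: 6%/yr growth (assumed)
panel_lifetime = 25             # years before CdTe panels retire into the recycling stream
recycling_rate = 0.8            # fraction of retired-panel Te recovered (assumed)

supply, installed = [], []
for i, yr in enumerate(years):
    primary = cu_refining_te_t * recovery_yield
    retired = installed[i - panel_lifetime] if i >= panel_lifetime else 0.0
    secondary = recycling_rate * retired
    s = primary + secondary
    supply.append(s)
    installed.append(min(s, demand_t[i]))   # Te actually deployed in panels this year

gap = demand_t - np.array(supply)
print(f"cumulative supply gap 2023-2050: {gap[gap > 0].sum():.0f} tonnes Te (illustrative)")
```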
The environmental impact of batteries depends on the source of virgin materials, the recycled material content, and the recovery method. Recycling helps manage future battery waste while providing a domestic supply source, but the environmental impact of recycled materials remains unclear. A comprehensive assessment of the environmental impact of conventional and new recycling methods is needed. The environmental impact of batteries also depends on the production location, the energy source, and the final battery chemistry. In this dissertation, a configurable LCA tool is developed to assess the environmental impact of batteries for different supply chain scenarios. This tool is first used to evaluate and compare three lithium-ion battery (LIB) recycling methods: 1) conventional hydrometallurgy (CHR), 2) truncated hydrometallurgy (THR), and 3) pyrometallurgy (PR). The same tool is used to evaluate the effect of recycled content on new batteries. Finally, multiple scenarios are evaluated to assess the environmental effect of reshoring the battery supply chain to the US. The results show that THR reduces the carbon footprint, water consumption, freshwater toxicity, and resource depletion potential of new batteries by 87%, 72%, 50%, and 36%, respectively, compared to CHR and PR. The effect of recycled materials on the environmental impact of new batteries varies by impact category and depends on the recycling method and the source of primary materials being replaced. In a best-case scenario, 100% recycled content can reduce LIB cells' carbon footprint and freshwater toxicity by 50% and 61%, respectively. However, water consumption and scarcity footprint improve only when high-impact virgin materials are replaced with recycled materials recovered via pyrometallurgy. Further analysis shows that offshoring the battery supply chain leads to the highest battery cell environmental footprint. Alternatively, batteries produced in Canada have the lowest impact, driven mainly by a cleaner electricity grid and source of primary materials. The environmental impact of 100% US-made batteries largely depends on the source of primary materials, specifically lithium and nickel. Increasing the renewable energy contribution to 1.75 kWh/kWh of cell produced can alleviate the high environmental impact of domestic nickel and lithium and reduce the environmental footprint of 100% US-made batteries.
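The recycled-content analysis can be pictured as a linear blend of virgin and recycled material impacts plus fixed process impacts, as in the sketch below; the impact factors are placeholders for illustration, not values from the dissertation's inventory or tool.

```python
# Hedged sketch of blending virgin and recycled material impacts into a cell-level footprint.
# The impact factors are placeholder values, not the dissertation's inventory data.
impacts_virgin = {"GWP_kgCO2e_per_kWh": 80.0, "water_m3_per_kWh": 0.30}
impacts_recycled = {"GWP_kgCO2e_per_kWh": 40.0, "water_m3_per_kWh": 0.20}
other_processes = {"GWP_kgCO2e_per_kWh": 30.0, "water_m3_per_kWh": 0.10}  # cell assembly, etc.

def cell_footprint(recycled_fraction):
    """Linear blend of recycled and virgin material impacts plus fixed process impacts."""
    return {k: recycled_fraction * impacts_recycled[k]
               + (1 - recycled_fraction) * impacts_virgin[k]
               + other_processes[k]
            for k in impacts_virgin}

for r in (0.0, 0.5, 1.0):
    print(f"recycled content {r:.0%}:", cell_footprint(r))
```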
Department: Civil and Environmental Engineering
Name: Preet Lal
Date Time: Tuesday, September 10th, 2024 - 12:00 p.m.
Advisor: Narendra Das
Soil moisture is a critical component of the Earth's water cycle, essential for various environmental and agricultural processes, and its significance is further underscored by the impacts of climate change. Changes in soil moisture patterns can have profound implications for hydrological dynamics, agricultural productivity, and ecosystem sustainability. To understand these changes, an initial study was conducted to examine the long-term spatiotemporal evolution of soil moisture and its interactions with key hydrometeorological parameters using coarse-resolution data. Over a 40-year period, it was found that approximately 50% of the global vegetated surface layer (0-7 [cm] depth) experienced significant drying. Conversely, only 9% of the global vegetated area showed an upward trend in soil moisture, largely attributed to increasing precipitation levels. While these results provide valuable insights into broad-scale soil moisture trends and their primary drivers, they also highlight the limitations of coarse-resolution data, which fails to capture the finer-scale processes and anthropogenic influences that are critical for understanding micro-scale feedback mechanisms.
However, the retrieval of high-resolution soil moisture products at a global scale can be achieved in this “Golden Age of SAR”. Among the upcoming L-band SAR missions, NISAR is in the final stages of preparation for launch. Taking advantage of the upcoming NISAR mission, a “multi-scale” soil moisture retrieval algorithm is therefore proposed for high-resolution retrieval. This algorithm is based on a disaggregation approach that combines coarse-resolution (9 [km]) soil moisture data with fine-scale co-polarization and cross-polarization backscatter measurements to retrieve high-resolution soil moisture. The algorithm can take coarse-resolution soil moisture input from either satellite radiometer products or climate model data. In this study, European Centre for Medium-Range Weather Forecasts (ECMWF) ERA5-Land reanalysis data were used as the input coarse-resolution soil moisture. ECMWF assimilates a large volume of satellite and in-situ observations to produce highly reliable datasets, and a major advantage of using climate model data as input is the reduced dependency on satellite mission lifetimes. The end goal of the algorithm is to remove dependencies on complex modeling, tedious retrieval steps, or multiple ancillary datasets, and subsequently decrease the degrees of freedom to achieve optimal accuracy in soil moisture retrievals. The proposed algorithm targets a spatial resolution of 200 [m], determined based on user requirements. Because NISAR data are not yet available, similar L-band data from UAVSAR acquired during the SMAPVEX-12 campaign and from ALOS-2 SAR were used for algorithm calibration and validation. The algorithm was initially tested on selected agricultural sites. The retrieved high-resolution soil moisture was validated against in-situ measurements, and the ubRMSE was below 0.06 [m³/m³], meeting the NISAR mission accuracy goals. Additionally, given SAR's ability to provide fine-resolution backscatter measurements at 10 [m] spatial resolution, the analysis was conducted at spatial resolutions of 100 [m] and 200 [m] across various hydrometeorological settings globally, including sites from polar to arid regions and diverse land uses. This retrieval and validation were performed using ALOS-2 L-band SAR time-series data. The retrieved soil moisture at both spatial resolutions showed consistent patterns, with the finer 100 [m] resolution providing more detailed information. The validation statistics show that the algorithm consistently maintained an ubRMSE below 0.06 [m³/m³] at both 100 [m] and 200 [m] spatial resolutions. The performance of the algorithm, even in forested regions with dense canopies, demonstrates its robustness, which is attributed to the higher penetration capability of the L-band SAR frequency.
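As a rough illustration of the disaggregation idea, the sketch below downscales a single coarse soil moisture value over the fine pixels it covers using co- and cross-polarized backscatter; the functional form, the calibration constants beta and gamma, and all numbers are hypothetical and do not reproduce the algorithm developed in this work.

import numpy as np

def disaggregate_sm(sm_coarse, sigma_co_fine_db, sigma_cross_fine_db, beta, gamma):
    """Hypothetical disaggregation sketch: downscale one coarse soil-moisture value
    over the fine pixels it covers using co- and cross-pol backscatter (in dB).
    beta  : assumed sensitivity of soil moisture to co-pol backscatter [m3/m3 per dB]
    gamma : assumed heterogeneity term relating cross-pol to co-pol variations [-]"""
    sigma_co_coarse = sigma_co_fine_db.mean()
    sigma_cross_coarse = sigma_cross_fine_db.mean()
    delta = (sigma_co_fine_db - sigma_co_coarse) + gamma * (sigma_cross_coarse - sigma_cross_fine_db)
    return np.clip(sm_coarse + beta * delta, 0.02, 0.60)  # keep within physical bounds

# Toy example: a 9 km cell covered by a 3x3 block of fine pixels (placeholder values).
sm_fine = disaggregate_sm(
    sm_coarse=0.25,
    sigma_co_fine_db=np.array([[-9.5, -10.2, -8.8], [-11.0, -10.5, -9.9], [-8.5, -9.0, -10.8]]),
    sigma_cross_fine_db=np.array([[-16.0, -17.1, -15.5], [-18.0, -17.5, -16.8], [-15.2, -15.9, -17.9]]),
    beta=0.02, gamma=0.3)
print(np.round(sm_fine, 3))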
However, since the validation statistics above are based on a limited number of sites, there is a need to estimate the error in the soil moisture retrieval for each grid cell to ensure comprehensive accuracy. Recognizing the limitations of in-situ measurements, which are sparse and geographically constrained, an analytical approach to estimate uncertainty in high-resolution soil moisture retrievals for the NISAR mission is also proposed. This approach accounts for errors in the input datasets and algorithm parameters. The approach was applied to the UAVSAR datasets from the SMAPVEX-12 campaign and compared with the ubRMSE for different crop types. The uncertainty estimates closely match the ubRMSE, demonstrating the robustness of the analytical approach. Overall, this study demonstrates the effectiveness of the proposed algorithm for high-resolution soil moisture retrieval for the NISAR mission and future SAR missions, with the potential to achieve spatial resolutions finer than 100 [m].
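The dissertation's analytical derivation is not reproduced here; as a generic illustration of how retrieval uncertainty can be propagated from input and parameter errors, a first-order (Gaussian, independent-error) sketch for a linear disaggregation form like the one sketched above might look as follows. All error magnitudes are placeholders.

import numpy as np

def sm_uncertainty(beta, gamma, delta_co_db, delta_cross_db,
                   sd_sm_coarse, sd_sigma_db, sd_beta, sd_gamma):
    """First-order variance propagation for the linear form sm = sm_coarse + beta*(d_co + gamma*d_cross),
    assuming independent errors (an illustrative assumption, not the dissertation's derivation)."""
    var = (1.0 * sd_sm_coarse) ** 2 \
        + (beta * sd_sigma_db) ** 2 \
        + (beta * gamma * sd_sigma_db) ** 2 \
        + ((delta_co_db + gamma * delta_cross_db) * sd_beta) ** 2 \
        + (beta * delta_cross_db * sd_gamma) ** 2
    return np.sqrt(var)

print(round(sm_uncertainty(beta=0.02, gamma=0.3, delta_co_db=1.0, delta_cross_db=0.5,
                           sd_sm_coarse=0.04, sd_sigma_db=0.5, sd_beta=0.005, sd_gamma=0.1), 3))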
Department: Civil and Environmental Engineering
Name: Hamad Bin Muslim
Date Time: Tuesday, October 29th, 2024 - 1:00 p.m.
Advisor: Dr. Syed Waqar Haider
Hot-mix asphalt (HMA) compaction at longitudinal joints is critical for pavement performance and longevity. Many highway agencies face challenges maintaining deteriorated joints, often resulting in issues like raveling along the centerline. Despite extensive research and training on proper HMA placement and compaction, joint deterioration remains a leading cause of premature flexible pavement failure. Improving joint compaction during construction is critical to better pavement performance. Longitudinal joint construction includes various methods—differing laying conditions, joint geometry, rolling patterns, and techniques. While each has advantages, these methods also carry risks in consistently achieving optimal compaction. Current quality assurance (QA) methods, such as coring and density gauges, are labor-intensive, time-consuming, and costly, and they offer limited coverage, increasing the likelihood of missing low-density areas. The variability in construction methods and the limitations of traditional QA testing raise the risk of inadequate joint compaction, potentially compromising the pavement's durability and performance.
The Dielectric Profiling System (DPS) offers a nondestructive alternative for assessing compaction quality, providing continuous real-time coverage by measuring dielectric values, which correlate with HMA density but need a calibrated relationship. Adopting DPS for QA testing requires alternative methods (other than air voids) to quickly assess joint density during construction. This study compared various longitudinal joint construction methods using dielectric measurements from Minnesota and Michigan road projects. The continuous dielectric data were discretized into subsections for analyses using relative dielectric differences that indicated over 2% more air voids at the joint than at the mat.
This study used a coreless calibration method with lab-prepared pucks to develop a new model for converting dielectric values to predicted air voids for similar analyses. Project- and group-wise calibrations were performed; project-specific models aligned well with cores collected during DPS and QA testing. Minor HMA production fluctuations across different days displayed minimal impact on air void predictions. Additionally, HMA mixtures were grouped for group-wise calibrations using recorded dielectric values and mix characteristics, which demonstrated reasonable accuracy. This approach highlights the potential for direct DPS data use in the field without needing project-specific models.
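For illustration, a coreless-style calibration can be sketched as fitting a dielectric-to-air-void relationship to laboratory puck measurements; the exponential functional form and all data below are hypothetical and are not the model developed in this study.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical puck data: measured dielectric constant vs. lab-determined air voids (%).
dielectric = np.array([4.2, 4.4, 4.6, 4.8, 5.0, 5.2, 5.4])
air_voids = np.array([10.8, 9.3, 8.1, 7.0, 6.1, 5.3, 4.6])

def voids_model(eps, a, b):
    # One commonly used exponential form; the model developed in this study may differ.
    return a * np.exp(b * eps)

(a, b), _ = curve_fit(voids_model, dielectric, air_voids, p0=(100.0, -0.5))
print(f"fitted: voids = {a:.1f} * exp({b:.2f} * dielectric)")
print(f"predicted voids at dielectric 4.9: {voids_model(4.9, a, b):.1f} %")

Once such a relationship is calibrated, every continuous dielectric reading can be converted to a predicted air void value, which is what allows analyses on predicted air voids rather than cores.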
Statistical analyses revealed that unconfined joints had the highest air void content, with 50 to 100% of subsections showing significant differences, indicating over 2% more air voids than the adjacent mat. Additionally, 60 to 100% of unconfined joint subsections fell below the 60% Percent Within Limits (PWL), the rejectable quality level (RQL). In contrast, all other joint types showed similar compaction to the mat, with negligible subsections below 60% PWL. These findings were consistent when using predicted air voids. Similarly, the probabilistic analysis showed a 30 to 60% likelihood that unconfined joints had significantly lower dielectric values than the mat, while other joints exhibited minimal differences or better compaction.
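For context, Percent Within Limits is typically computed from a quality index based on the sample mean and standard deviation; the sketch below uses a normal approximation with hypothetical joint density values. Agencies generally use standard PWL tables for small sample sizes, and the exact procedure applied in this study is not restated here.

from statistics import mean, stdev
from math import erf, sqrt

def pwl_lower_limit(samples, lower_spec_limit):
    """Percent Within Limits for a one-sided lower limit, using the quality index
    Q = (mean - LSL) / s and a normal approximation (illustrative only)."""
    q = (mean(samples) - lower_spec_limit) / stdev(samples)
    return 100.0 * 0.5 * (1.0 + erf(q / sqrt(2.0)))

# Hypothetical joint density subsections (% of maximum theoretical density), LSL = 90.0%:
joint_density = [90.8, 89.5, 91.2, 88.9, 90.1, 89.8]
pwl = pwl_lower_limit(joint_density, 90.0)
print(f"PWL = {pwl:.1f}%  ->  {'below the 60% RQL' if pwl < 60 else 'acceptable'}")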
This study introduces a Longitudinal Joint Quality Index (LJQI) that enables the direct use of dielectric values to enhance the field applicability of DPS. A threshold of 70% LJQI was established for joint quality acceptance. LJQI comparisons revealed that unconfined joints had higher void content than the adjacent mat in 11 to 89% of stations across multiple projects. All of the analyses consistently indicated that constructing butt or tapered joints, and avoiding unconfined joints, leads to better joint density. Moreover, smaller subsections were more effective at identifying local compaction problems; for practical reasons, 100 ft subsections are suggested for analyses.
Many State Highway Agencies (SHAs) rely on specifications that focus on as-constructed air voids to assess construction quality and determine pay factors (PF) for contractor payments, often neglecting the performance of longitudinal joints. This study proposes a Performance-Related Specification (PRS) framework that leverages the DPS's continuous data to link joint service life to void content, used as the Acceptance Quality Characteristic (AQC). By using air void content as AQC and PWL quality measure, SHAs can more accurately assess joint quality and make informed pay adjustments, ensuring durable, high-quality pavements while minimizing overpayments.
Department: Civil and Environmental Engineering
Name: Zheng Li
Date Time: Wednesday, September 4th, 2024 - 1:00 p.m.
Advisor: Dr. Alison Cupples
Microorganisms play important roles in complex and dynamic environments such as agricultural soils and contaminated site sediments. Molecular methods have greatly advanced the understanding of microbial processes, such as nitrogen cycling, carbon cycling and contaminant biodegradation, by providing insights into the structure, function and dynamics of microbial communities.
The first project evaluated the impact of four agricultural management practices (no tillage, conventional tillage, reduced input, biologically based) on the abundance and diversity of microbial communities regulating nitrogen cycling using shotgun sequencing. The relative abundance values, diversity and richness indices, taxonomic classification and genes associated with nitrogen metabolism were examined. The microbial communities involved in nitrogen metabolism are sensitive to varying soil conditions, which in turn, likely has important implications for N2O emissions. This work was conducted virtually during the COVID pandemic.
The second project examined the impact of plant diversity, soil pore size, and incubation time on soil microbial communities in response to new carbon inputs (glucose). Soil cores from three plant systems (no plants, monoculture switchgrass, and high diversity prairie) were incubated with labeled and unlabeled glucose. The phylotypes responsible for carbon uptake from glucose were identified using stable isotope probing (SIP). The microbial communities were influenced by plant diversity but not by pore size or incubation time. The differentiated carbon assimilators may be linked to different carbon assimilation strategies (r- vs. K-strategists) depending on pore size.
The third and fourth projects focused on the biodegradation of the common groundwater contaminant, 1,4-dioxane. 1,4-Dioxane was commonly used as a stabilizer in 1,1,1-trichloroethane formulations and is now frequently detected at sites where the chlorinated solvents are present. A major challenge in addressing 1,4-dioxane contamination concerns chemical characteristics that result in migration and persistence. Given the limitations associated with traditional remediation methods, interest has turned to bioremediation to address 1,4-dioxane contamination.
The third project examined the impact of yeast extract and basal salts medium (BSM) on 1,4-dioxane biodegradation rates and the microorganisms involved in carbon uptake from 1,4-dioxane. For this, laboratory microcosms and abiotic controls were inoculated with three soils and amended with media (water, or BSM and yeast extract) and 2 mg/L 1,4-dioxane. SIP was then utilized to identify the active phylotypes involved in 1,4-dioxane biodegradation. The amendment of BSM and yeast extract enhanced 1,4-dioxane degradation in all three soil types. Gemmatimonas, unclassified Solirubacteraceae, and Solirubrobacter were associated with carbon uptake from 1,4-dioxane and may represent novel degraders. Solirubrobacter and Pseudonocardia were associated with propane monooxygenase genes, which potentially function in 1,4-dioxane biodegradation.
The fourth project further explored the impact of yeast extract on 1,4-dioxane degradation at low concentrations (< 500 mg/L) using sediment from three impacted sites and four agricultural soils. 1,4-Dioxane biodegradation trends differed between inocula sources and treatments. For two of the impacted sites, no 1,4-dioxane biodegradation was observed for any treatment, indicating a lack of 1,4-dioxane degraders. In contrast, 1,4-dioxane degradation occurred in all treatments in microcosms inoculated with the agricultural soil or the other impacted site sediments. Bioaugmentation with agricultural soils initiated 1,4-dioxane biodegradation in the sediments with no intrinsic degradation capacities. Overall, yeast extract enhances 1,4-dioxane biodegradation in specific sediments. Bioaugmenting site sediments with agricultural soils may represent a promising approach for the remediation of 1,4-dioxane contaminated sites.
Department: Civil and Environmental Engineering
Name: Xuyang Li
Date Time: Friday, August 23rd, 2024 - 12:00 p.m.
Advisor: Nizar Lajnef
The convergence of artificial intelligence (AI) with engineering and scientific disciplines has catalyzed transformative advancements in both structural health monitoring (SHM) and the modeling of complex physical systems. This dissertation explores the development and application of AI-driven methodologies with a focus on anomaly detection and inverse modeling for domain-specific and other scientific problems.
SHM is vital for the safety and longevity of structures like buildings and bridges. With the growing scale and potential impact of structural failures, there is a dire need for scalable, cost-effective, and passive SHM techniques tailored to each structure without relying on complex baseline models. We introduce Mechanics-Informed Damage Assessment of Structures (MIDAS), which continuously adapts a bespoke baseline model by learning from the structure's undamaged state. Numerical simulations and experiments show that incorporating mechanical characteristics into the autoencoder improves minor damage detection and localization by up to 35% compared to standard autoencoders.
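A minimal reconstruction-error baseline of the kind MIDAS builds on can be sketched as follows; this plain autoencoder omits the mechanics-informed elements that distinguish MIDAS, and all data here are random placeholders rather than measured structural responses.

import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Plain reconstruction autoencoder over response features; MIDAS additionally
    embeds mechanical characteristics in the architecture, which this sketch omits."""
    def __init__(self, n_features=32, latent=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(), nn.Linear(16, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 16), nn.ReLU(), nn.Linear(16, n_features))
    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_baseline(model, healthy_data, epochs=200, lr=1e-3):
    opt, loss_fn = torch.optim.Adam(model.parameters(), lr=lr), nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(healthy_data), healthy_data)
        loss.backward()
        opt.step()
    return model

# Hypothetical usage: learn the undamaged state, then flag high reconstruction error later.
healthy = torch.randn(500, 32)           # stand-in for features from the undamaged structure
model = train_baseline(Autoencoder(), healthy)
with torch.no_grad():
    err = ((model(healthy) - healthy) ** 2).mean(dim=1)
threshold = err.mean() + 3 * err.std()   # simple 3-sigma damage threshold (illustrative)
print(f"damage threshold on reconstruction error: {threshold.item():.4f}")

New measurements whose reconstruction error exceeds the learned threshold are treated as departures from the baseline, which is the continuously adapted, structure-specific reference MIDAS maintains.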
In addition to anomaly detection, we introduced NeuralSI for structural identification, estimating key nonlinear parameters in mechanical components such as beams and plates by augmenting partial differential equations (PDEs) with neural networks. Because it requires only limited measurement data, NeuralSI is well suited for SHM applications where the exact state of a structure is often unknown. The model can extrapolate to both standard and extreme conditions using the identified structural parameters. Compared to purely data-driven neural networks and other physics-informed neural networks (PINNs), NeuralSI reduces interpolation and extrapolation errors in displacement distribution by two orders of magnitude.
Building on this approach, we expanded our focus to broader systems modeled by parameterized PDEs, which are prevalent in various physical, industrial, and social phenomena. These systems often have unknown or unpredictable parameters that traditional methods struggle to estimate due to real-world complexities like multiphysics interactions and limited data. We introduce NeuroPIPE, which estimates unknown field parameters from sparse observations by modeling them as functions of space or state variables using neural networks. Applied to several physical and biomedical problems, NeuroPIPE achieves a 100-fold reduction in parameter estimation errors and a 10-fold reduction in peak dynamic response errors, greatly enhancing the accuracy and efficiency of complex physics modeling.
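The shared idea behind NeuralSI and NeuroPIPE, estimating unknown parameters by jointly fitting sparse observations and the residual of a governing PDE, can be sketched on a manufactured 1D problem. The equation, networks, loss weights, and training budget below are illustrative choices, not the methods' actual formulations, and a brief run like this is only indicative of recovery.

import math
import torch
import torch.nn as nn

torch.manual_seed(0)

def mlp(out_dim=1):
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, out_dim))

u_net, k_net = mlp(), mlp()   # state u(x) and unknown field parameter k(x)

# Manufactured example: true u(x) = sin(pi x), true k(x) = 1 + 0.5 x,
# governing PDE: d/dx( k(x) du/dx ) + q(x) = 0 on [0, 1] with u(0) = u(1) = 0.
def q(x):
    return -(0.5 * math.pi * torch.cos(math.pi * x)
             - (1 + 0.5 * x) * math.pi ** 2 * torch.sin(math.pi * x))

x_obs = torch.linspace(0.1, 0.9, 7).reshape(-1, 1)                    # sparse observation locations
u_obs = torch.sin(math.pi * x_obs)                                    # observations of the state u
x_col = torch.linspace(0, 1, 64).reshape(-1, 1).requires_grad_(True)  # collocation points

opt = torch.optim.Adam(list(u_net.parameters()) + list(k_net.parameters()), lr=1e-3)
for step in range(3000):
    opt.zero_grad()
    u = u_net(x_col)
    du = torch.autograd.grad(u, x_col, torch.ones_like(u), create_graph=True)[0]
    flux = torch.nn.functional.softplus(k_net(x_col)) * du            # softplus keeps k(x) > 0
    dflux = torch.autograd.grad(flux, x_col, torch.ones_like(flux), create_graph=True)[0]
    loss = ((dflux + q(x_col)) ** 2).mean() \
         + 50 * ((u_net(x_obs) - u_obs) ** 2).mean() \
         + 50 * (u_net(torch.tensor([[0.0], [1.0]])) ** 2).mean()     # boundary conditions
    loss.backward()
    opt.step()

# In this manufactured example the true value of k(0.5) is 1.25.
print("estimated k(0.5):", float(torch.nn.functional.softplus(k_net(torch.tensor([[0.5]])))))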
Bio: Xuyang Li is a dual Ph.D. candidate in Civil Engineering and Computer Science at Michigan State University, where he is co-advised by Prof. Nizar Lajnef and Prof. Vishnu Boddeti. Li’s research interests lie in leveraging domain knowledge to advance machine learning, particularly in physics-informed machine learning for dynamic system modeling. He has worked on machine learning-based spatial-temporal modeling, anomaly detection, and parameter estimation in various dynamic systems, along with finite element modeling.
Department: Civil and Environmental Engineering
Name: Liang Zhao
Date Time: Tuesday, August 20th, 2024 - 2:00 p.m.
Advisor: Dr. Irene Xagoraraki
In recent decades, we have witnessed numerous outbreaks worldwide, resulting in millions of infections and deaths. Examples include the 1918 H1N1 virus, the 1968 H3N2 virus, the 2003 SARS coronavirus, the 2012 MERS-CoV, and the 2019 SARS-CoV-2. Factors including rapid population growth, the escalating climate change crisis, recurring natural disasters, booming immigration and globalization, and concomitant sanitation and wastewater management challenges are anticipated to exacerbate the frequency of disease outbreaks in the years to come. The traditional disease detection system primarily relies on the diagnostic analysis of specimens collected from infected individuals in clinical settings. This approach has significant limitations in predicting and providing early warnings for impending disease outbreaks. Infected individuals are often tested only after the development of symptoms, and health authorities are usually notified following the inception of a disease surge. Consequently, health authorities respond reactively instead of taking proactive measures during a pandemic. Additionally, clinical data collected by traditional disease surveillance systems often fail to accurately reflect actual infections in communities because of dominant asymptomatic infections, the inability of clinical testing to capture all infections, limitations in testing supplies and accessibility, and patients’ testing behaviors. Environmental surveillance, especially wastewater surveillance or wastewater-based epidemiology, allows analyses of environmental community composite samples. Municipal wastewater samples are composite biological samples of an entire community that represent a snapshot of the disease burden of the population covered by the corresponding sewershed. Collecting and analyzing untreated wastewater samples from centralized wastewater treatment plants and neighborhood manholes for specific viral and bacterial targets at a regular cadence can reveal the trends of pathogen concentrations in wastewater. These trends represent the viral and bacterial loads shed by infected individuals, whether they are symptomatic or asymptomatic. Based on measured wastewater concentrations of disease pathogens and other available datasets, such as clinical and demographic datasets, researchers can establish models to predict disease incidence before clinical reporting and develop tools to provide early warnings of upcoming surges of diseases. This crucial information can help public health officials make informed decisions regarding the implementation of preparedness measures and the allocation of resources. The primary objective of this dissertation is to develop comprehensive laboratory, technological, and translational methodologies for forecasting viral and bacterial outbreaks through wastewater-based epidemiology.
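As a simple illustration of how wastewater signals can be screened for an early-warning lead time relative to clinical reporting, the sketch below cross-correlates a synthetic wastewater concentration series against lagged case counts; the data are simulated and the approach is far simpler than the modeling developed in this dissertation.

import numpy as np

def best_lead_time(wastewater_conc, clinical_cases, max_lag_days=21):
    """Screen for the lag (in days) at which wastewater concentrations best correlate
    with later reported cases; a cross-correlation illustration only."""
    ww = np.asarray(wastewater_conc, dtype=float)
    cases = np.asarray(clinical_cases, dtype=float)
    best_lag, best_r = 0, -1.0
    for lag in range(max_lag_days + 1):
        r = np.corrcoef(ww[:len(ww) - lag], cases[lag:])[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r

# Synthetic epidemic wave: the wastewater signal leads reported cases by about 7 days.
rng = np.random.default_rng(1)
t = np.arange(130)
wave = np.exp(-((t - 60) / 15.0) ** 2)
wastewater = 1e4 * wave[7:107] * (1 + 0.1 * rng.standard_normal(100))   # gene copies/L (placeholder)
cases = 50 * wave[:100] * (1 + 0.1 * rng.standard_normal(100))          # reported cases (placeholder)
lag, r = best_lead_time(wastewater, cases)
print(f"wastewater leads clinical cases by ~{lag} days (r = {r:.2f})")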
Bio: Liang Zhao is a fourth-year PhD candidate in environmental engineering at Michigan State University. In his doctoral studies at MSU, Liang has used molecular microbiology laboratory techniques, mathematical tools, statistical and visualization methods to develop pre-emergence systems that enable health departments and practitioners to utilize environmental surveillance to determine early warnings and predict infections of existing and emerging human communicable diseases, including COVID-19, norovirus, RSV, and sexually transmitted infections of Chlamydia and Syphilis. He has closely worked on wastewater surveillance projects with the Michigan Department of Health and Human Services, Great Lakes Water Authority, and local health departments in the City of Detroit, as well as Wayne, Macomb, and Oakland counties.
Department: Civil and Environmental Engineering
Name: Mohammad Wasif Naqvi
Date Time: Wednesday, July 17th, 2024 - 9:00 a.m.
Advisor: Dr. Bora Cetin
Freeze-thaw action in soils, a process where soil moisture freezes and thaws, causes significant heave and settlement, leading to substantial damage to pavements and infrastructure, particularly in seasonally freezing regions. This increases maintenance costs, reduces structural integrity, and shortens roadway and other important infrastructure lifespans. In 2013 alone, U.S. state highway agencies reported spending approximately $27 billion on pavement maintenance, and freeze-thaw damage is considered one of the factors responsible for these expenses. Addressing this issue is essential for infrastructure durability and performance in affected areas, decreasing economic costs and improving safety. This dissertation explores an innovative solution known as engineered water repellency to mitigate the impacts of freeze-thaw cycles on soils. The study also investigates the impact of salt concentrations in soil caused by road deicing operations on freeze-thaw action in soils. An extensive literature review provides a comprehensive understanding of the mechanisms of frost action, its impacts on infrastructure, and existing mitigation strategies.
The research employs both experimental and large-scale testing methodologies to evaluate the efficacy of organosilane (OS) treatments in reducing frost heave and moisture migration in frost-susceptible soils by imparting water repellency to the soil. A novel large-scale soil test box simulates realistic environmental conditions, providing valuable insights into the freeze-thaw action in soil and the practical application of OS treatments. Results from the study demonstrate that OS treatments significantly mitigate frost heave and improve soil stability by reducing moisture migration. Specifically, OS-treated soils showed a reduction in maximum soil heave by up to 96% and water migration by up to 97% compared to untreated soils. The large-scale test box, which provided controlled yet realistic top-down freezing conditions, revealed that treated soils maintained higher minimum temperatures and lower moisture content above the hydrophobic layer thereby reducing the heave monitored at 0.15 m depth. However, the importance of integrating proper drainage systems was highlighted to prevent excessive moisture accumulation and ensure the effectiveness of water-repellency treatments in real-world applications.
The present study also investigates the effects of varying sodium chloride (NaCl) concentrations on freeze-thaw behavior, revealing that higher salt levels effectively lower the freezing point, reduce heave rates, and decrease water intake. The study emphasizes the importance of simulating realistic temperature gradients to understand the effect of salt concentration on freeze-thaw behavior in soils. For instance, soils with 5% NaCl concentration showed significant freezing point depression and reduced heave rates to 11.3 mm/day (ASTM) and 1.5 mm/day (low-temperature gradient) from 22.5 mm/day and 17.2 mm/day, respectively, in the control. Additionally, salt treatments effectively decreased moisture content and water migration, with the highest salt concentration demonstrating the most substantial reductions. However, salt migrates toward the freezing front, increasing soil salt concentrations in the upper layers.
An economic analysis using life cycle cost analysis (LCCA) confirmed that engineered water repellency is a cost-effective long-term solution compared to traditional methods. While initial costs might be higher, the lower equivalent uniform annual costs (EUAC) and net present values (NPV) of OS treatments make them economically viable over the long term. These findings collectively advance the understanding of soil behavior under freeze-thaw conditions and propose practical, economically viable strategies for improving infrastructure resilience in cold climates. Future research should focus on field validations and long-term monitoring to refine these strategies and ensure their effectiveness across diverse environmental conditions.
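For reference, the EUAC comparison rests on standard engineering-economy formulas; the sketch below computes NPV and EUAC for two hypothetical cost streams, and the costs, analysis period, and discount rate are placeholders rather than figures from this study.

def npv(cash_flows, rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

def euac(npv_value, rate, years):
    """Equivalent uniform annual cost via the capital recovery factor."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return npv_value * crf

# Hypothetical comparison over a 20-year analysis period at a 4% discount rate:
rate, years = 0.04, 20
untreated = [100_000] + [8_000] * 20      # lower initial cost, higher annual maintenance (placeholder)
os_treated = [130_000] + [2_000] * 20     # higher initial cost, lower annual maintenance (placeholder)
for name, flows in (("untreated", untreated), ("OS-treated", os_treated)):
    value = npv(flows, rate)
    print(f"{name:11s} NPV = ${value:,.0f}, EUAC = ${euac(value, rate, years):,.0f}/yr")

With numbers of this shape, the treated option shows a lower EUAC despite its higher first cost, which mirrors the qualitative conclusion stated above.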
Department: Civil and Environmental Engineering
Name: Huy Dang
Date Time: Monday, July 15th, 2024 - 2:00 p.m.
Advisor: Dr. Yadu Pokhrel
Dams are some of the most important man-made structures that provide significant benefits to societies by mitigating floods and droughts while supporting irrigation, domestic or industrial water supply, and power generation. However, global attention on the detrimental ramifications of dam operations has increased owing to the observed irreversible environmental impacts of existing dams in over-developed regions. Despite these concerns, the growing demands for energy and water in developing regions have led to a boom in the construction of large dams in recent years with hundreds more planned in the near future. Additionally, the construction and operation of dams in these regions are often based on localized, incomplete, or inconsistent observation-based hydrologic analyses, rendering them less effective in mitigating hazard risks. Simultaneously, climate change is intensifying flood and drought events, making them less predictable and more destructive, especially in developing regions. Thus, there is an urgent need for in-depth investigation of past changes as well as future uncertainties in hydrology of these regions under the compound impact of climate change and dam operations.
This dissertation addresses these critical issues by employing a high-resolution river-floodplain-reservoir model called CaMa-Flood-Dam (CMFD), which realistically accounts for hydropower and irrigation dam operations. Model simulations are used to quantify the changes in river regime and flood dynamics in the Mekong River Basin (MRB). First, analyses of an important subbasin with unique hydrological features in the MRB, the Tonle Sap, are conducted to provide a comprehensive assessment of the alteration of the Tonle Sap Lake, Southeast Asia's largest lake. Then, key insights are presented on the evolving river regime and flood pulse of the entire MRB over 83 years, focusing on the difference between climate and dam impacts on seasonal timing and water balance. Finally, potential changes in river regime and extremes across the MRB under multiple combinations of future climate and planned dam development are explored. The key findings from these analyses are: (1) trends and variability in Mekong river flow are still mainly driven by climate variation; however, dam operations have exerted a growing influence on the Mekong flood pulse, especially after 2010; (2) dams are causing a gradual shrinkage of the Tonle Sap Lake by reducing its annual inflow from the Mekong mainstream; (3) dams are delaying the Mekong's wet season onset and shortening its duration; (4) dams have largely altered the Lower Mekong flood occurrence by shifting substantial volumes of water between the seasons; and (5) in the future, dams will notably increase dry season flow.
The results in this dissertation provide major advances and important insights on the integrated river-floodplain-reservoir dynamics in the MRB and pave pathways toward more sustainable development based on an understanding of the continually changing hydrological systems in the region. Furthermore, this assessment could benefit future investigations in other developing regions worldwide where dam construction is similarly booming.
Department: Civil and Environmental Engineering
Name: Celso Santos
Date Time: Wednesday, July 10th, 2024 - 12:00 p.m.
Advisor: Dr. Bora Cetin
The long-term performance of pavement depends on the complex geomechanical properties of the unbound materials used in the construction of the pavement foundation. When the pavement is subjected to cyclic stresses, the stress is transmitted downward through the aggregates that compose the different layers (i.e., base, subbase, and subgrade). Material properties such as gradation, density, plasticity index, moisture sensitivity, aggregate shape, stiffness (resilient modulus (MR)), and drainage capacity are crucial qualities that contribute to drainage and stress dissipation and protect the pavement from distresses such as cracking and rutting. For instance, a subgrade layer composed of expansive clay undergoes significant volume changes in response to variations in moisture content. Consequently, it exerts powerful pressures on the pavement structure, leading to uplift during wet periods and settlement during dry periods.
The base and subbase layers protect the subgrade from excessive traffic loads while facilitating pavement drainage. Ideally, natural aggregate is used in the construction of pavement foundations. However, due to the high cost, environmental impact, and scarcity of natural aggregates, recycled concrete aggregate (RCA) has been used as an alternative. The crushed nature of RCA offers superior mechanical benefits, such as high stiffness, compared to natural aggregate (GM). However, the presence of unhydrated cement and cement mortar in RCA can affect the long-term performance of pavement and its drainage properties, potentially causing significant distress. While RCA is a stiffer and more sustainable option, its properties are not fully understood. Additionally, there is still a lack of consensus on the effect of geomaterial index properties on the geomechanical properties of both RCA and natural aggregates used in the construction of pavement foundation layers.
To address these issues, several base (RCAs and GMs) and subgrade unbound materials with different index properties were collected from various roadway sections under construction in Michigan. An extensive evaluation was conducted to understand how their index properties affect: 1) the stress-strain response of subgrade (i.e., sand and clay) and base (i.e., RCA and GM) unbound materials; 2) the hydraulic properties (i.e., hydraulic conductivity and the water content-matric suction relationship); and 3) the time required to drain 50% of a saturated base layer. The stress-strain response of sandy and fine unbound subgrade soils was evaluated using the NCHRP and shakedown concepts. Based on their gradation and plasticity index, the materials showed stress-hardening, stress-hardening followed by stress-softening, or stress-softening behavior. Further analysis was conducted to understand the confining pressure and stress dependency of these materials. To study the effect of index properties on RCA and GM, principal component analysis (PCA) was employed for dimensionality reduction and to identify patterns within the dataset. Based on the PCA results, six materials were selected, and a model was developed to estimate laboratory resilient modulus results using falling weight deflectometer (FWD) field tests. Additionally, the hydraulic and time-to-drain properties of the base materials were evaluated to further understand the impact of material properties on base layer performance and their unsaturated behavior. The findings led to several recommendations for materials used in designing sustainable and long-life pavement. Detailed discussions of the results are provided in the following chapters.
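As a small illustration of how PCA can separate aggregate sources by their index properties, the sketch below projects a handful of hypothetical materials (with made-up gradation, density, and absorption values, not the study's dataset) onto the first two principal components.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical index properties (rows = materials):
# columns: % fines, coefficient of uniformity, max dry unit weight (kN/m3), absorption (%)
X = np.array([
    [4.0, 25.0, 21.0, 0.9],   # GM-1
    [6.5, 32.0, 20.4, 1.1],   # GM-2
    [8.0, 18.0, 19.6, 4.8],   # RCA-1
    [9.5, 22.0, 19.2, 5.5],   # RCA-2
    [5.5, 28.0, 20.8, 1.0],   # GM-3
    [7.0, 20.0, 19.9, 4.2],   # RCA-3
])
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for label, (pc1, pc2) in zip(["GM-1", "GM-2", "RCA-1", "RCA-2", "GM-3", "RCA-3"], scores):
    print(f"{label}: PC1 = {pc1:+.2f}, PC2 = {pc2:+.2f}")

Clusters in the reduced space are what allow a small, representative subset of materials to be selected for detailed resilient modulus and drainage testing.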
Department: Civil and Environmental Engineering
Name: Augusto Masiero Gil
Date Time: Tuesday, July 2nd, 2024 - 1:00 p.m.
Advisor: N/A
Fire represents a significant hazard to bridges, often resulting in damage or collapse of structural members. Typically, bridge fires result from crashes or overturns of vehicles carrying large amounts of flammable materials near bridges. These fires have become a growing concern over the last decade due to increasing urbanization and transportation of hazardous materials. Characterized by the rapid onset of very high temperatures (above 1000°C), these fires significantly affect the stability and integrity of structural members. Despite these risks, current bridge codes and standards do not specify any fire safety features in the design and construction of bridges, leaving critical transportation infrastructure vulnerable to fire hazard.
While there has been some research in recent years on the fire response of steel and composite bridges, there have been no studies that addressed the fire problem in concrete bridges. Further, prestressed concrete girders, designed with slender cross-sections to reduce self-weight and span longer distances, can experience faster degradation during fire exposure due to rapid temperature propagation within the girder cross-section. Although conventional concrete members have good fire response properties, newer concrete types such as High-Strength Concrete (HSC) and Ultra-High Performance Concrete (UHPC) experience faster degradation of mechanical properties at elevated temperatures and are also more susceptible to fire-induced spalling.
To address some of the identified knowledge gaps, experimental and numerical studies on the fire response of concrete bridge girders have been carried out. As part of the experimental work, pore pressure measurements in concrete at elevated temperatures were conducted to evaluate the mechanisms that lead to fire-induced spalling in concrete. Also, shear strength tests were carried out to assess the degradation of shear strength with temperature in UHPC. Complementing the experimental studies, a comprehensive finite element-based numerical model was developed to trace the response of concrete bridge girders under fire conditions. The model accounts for varying fire scenarios, loading conditions, and temperature-dependent thermal and mechanical properties of steel and concrete, and was validated with data from fire tests. To develop typical bridge fire scenarios, fire dynamics simulations were carried out and incorporated into the model.
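To give a sense of the thermal side of such a model, the sketch below solves 1D transient conduction into a concrete section with an explicit finite-difference scheme. Prescribing the surface temperature as the ISO 834 standard fire curve and using constant thermal properties are simplifying assumptions for illustration only; the dissertation's model uses temperature-dependent properties, fire dynamics simulations, and its own bridge fire curve.

import math

def concrete_temperatures(depths_mm=(25, 50, 75), duration_min=120,
                          alpha=6.7e-7, dx=0.005, length=0.30):
    """Explicit 1D finite-difference conduction sketch: slab surface follows the
    ISO 834 gas temperature, far face is adiabatic, properties are constant."""
    n = int(length / dx) + 1
    T = [20.0] * n
    dt = 0.4 * dx * dx / alpha              # satisfy the explicit stability limit (Fo <= 0.5)
    steps = int(duration_min * 60 / dt)
    for s in range(1, steps + 1):
        t_min = s * dt / 60.0
        T_new = T[:]
        T_new[0] = 20.0 + 345.0 * math.log10(8.0 * t_min + 1.0)   # ISO 834 standard fire curve
        for i in range(1, n - 1):
            T_new[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
        T_new[-1] = T_new[-2]               # adiabatic back face
        T = T_new
    for d in depths_mm:
        print(f"T at {d} mm after {duration_min} min: {T[round(d / 1000 / dx)]:.0f} C")

concrete_temperatures()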
A set of parametric studies were undertaken to evaluate the effect of critical parameters on the fire response of concrete bridge girders. Results demonstrate that smaller concrete sections present lower fire resistance due to their lower thermal mass, and that I-shaped concrete girders are susceptible to shear failure from the high temperatures in their webs. Other design parameters, such as span length and concrete strength, also significantly affect the fire performance of concrete bridges. In addition, fire simulations have shown that bridge fires present high severity and are influenced by the bridge geometrical features. Based on these findings, recommendations to improve the fire design of bridge girders have been proposed. For conventional concrete bridge girders, increasing cross-sectional size and limiting exposure of the web to the high temperatures can improve fire performance. Internal pressure and spalling can be reduced in UHPC members through addition of polypropylene fibers. Additionally, parameters for assessing the fire resistance of bridge girders, such as failure criteria and a bridge fire curve that accounts for the thermal gradient along the girder length were proposed. The developed numerical tool is also applied to analyze the fire-induced collapse of the I-95 overpass in Philadelphia on June 11, 2023.
Keywords: Concrete bridges, Fire safety, Bridge girders, Ultra-high performance concrete
Department: Civil and Environmental Engineering
Name: Peng Chen
Date Time: Thursday, June 27th, 2024 - 10:00 a.m.
Advisor: Dr. Karim Chatti and Dr. Bora Cetin
Accurately predicting strain responses under axle loadings is crucial for the design of flexible pavements using the mechanistic-empirical approach, especially within the prevalent Pavement ME methodology. These strains are directly used in pavement damage calculation and predicting distresses. The stiffness of the top layer of flexible pavement, asphalt concrete (AC), is influenced by both loading frequency and temperature due to its viscoelastic nature. Typically, the mechanistic behavior of AC is characterized by the dynamic modulus (E*) master curve, derived from laboratory tests under uniaxial sinusoidal loadings. While a full dynamic viscoelastic analysis can precisely predict critical strains, it is computationally demanding. Consequently, Pavement ME employs a layered linear-elastic analysis, relying on the concept of "equivalent loading frequency" to determine the elastic modulus of the AC layer under specific axle loadings. However, this method has limitations in accurately predicting critical strains within the AC layer.
This thesis introduces two novel frequency calculation methods: the "centroid of PSD" and the "equivalent frequency." The former computes frequency based on the weighted center of Power Spectral Density (PSD) of vertical stress pulses induced by axle loadings, while the latter iteratively adjusts frequency until it matches strains computed by dynamic viscoelastic analysis under moving loads. The accuracy of these methods, alongside the Pavement ME method, is evaluated against dynamic viscoelastic analysis results under moving loads.
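The "centroid of PSD" idea can be sketched for a single synthetic stress pulse as the PSD-weighted mean frequency; the pulse shape, duration, and implementation details below are illustrative assumptions rather than the procedure developed in this thesis.

import numpy as np

def psd_centroid_frequency(stress_pulse, dt):
    """PSD-weighted mean frequency of a stress pulse (illustrative implementation)."""
    spectrum = np.fft.rfft(stress_pulse - stress_pulse.mean())   # drop the DC component
    freqs = np.fft.rfftfreq(len(stress_pulse), d=dt)
    psd = np.abs(spectrum) ** 2
    return float(np.sum(freqs * psd) / np.sum(psd))

# Hypothetical haversine-shaped vertical stress pulse of 0.03 s duration:
dt = 1e-4
t = np.arange(0, 0.2, dt)
pulse_duration = 0.03
pulse = np.where(t < pulse_duration, np.sin(np.pi * t / pulse_duration) ** 2, 0.0)
print(f"centroid-of-PSD frequency: {psd_centroid_frequency(pulse, dt):.1f} Hz")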
Findings reveal that while Pavement ME underestimates surface strains, it provides reasonable predictions with increasing depth for single and multiple axle configurations. Differences in loading frequencies between axle configurations are highlighted, and a correction method based on pulse width and equivalent frequency is proposed. Finally, both the original and corrected frequencies are implemented in MEAPA software to predict long-term pavement distress for real projects in Michigan. The results show that the difference between bottom-up fatigue cracking predicted by the original and corrected Pavement ME frequencies is negligible. The corrected frequency yields higher rutting predictions compared to the original Pavement ME method, ranging from approximately 15% to over 20% for AC rutting and 5% to 10% for total rutting, depending on pavement structures and traffic volumes.
Department: Civil and Environmental Engineering
Name: Brijen Miyani
Date Time: Sunday, April 22nd, 2024 - 1:00 p.m.
Advisor: Dr. Irene Xagoraraki
The recent COVID-19 pandemic has highlighted the importance of wastewater-based epidemiology (WBE) methods to effectively monitor and predict infectious viral disease outbreaks. Traditional disease detection systems rely on the identification of infectious agents by diagnostic analysis of clinical samples, often after an outbreak has been established. These surveillance systems are limited in their ability to predict outbreaks, since it is impossible to test every individual in a community for all potential viral infections that may be emerging. Untreated wastewater may serve as a community-based sample that can be tested to identify the diversity of endemic and emerging human viruses prevalent in the community. WBE can help reduce the load on medical systems, guide clinical testing, and provide early warnings. This dissertation presents innovative screening tools based on molecular methods, high-throughput sequencing, and bioinformatics analysis that can be applied in the analysis of wastewater samples to identify viral diversity in the corresponding catchment community. Further, population biomarker methods were developed to normalize the signals. The first chapter of the dissertation focuses on an application of a bioinformatics-based screening tool that revealed a high abundance of the rare human herpesvirus 8 in Detroit wastewater. The second chapter focuses on early warning of the COVID-19 second wave in Detroit, MI. The third chapter focuses on surveillance of SARS-CoV-2 in nine neighborhood sewersheds in the Detroit Tri-County area, United States, and assesses per capita SARS-CoV-2 estimations and COVID-19 incidence. The fourth chapter uses molecular methods to identify a wide variety of human viruses in Trujillo-Peru wastewater and confirms COVID-19, monkeypox, and diarrheal disease outbreaks. The fifth chapter reveals signals of poliovirus types 1 and 3 detected in municipal wastewater in Trujillo-Peru and discusses the implications of positive results in communities.
Department: Civil and Environmental Engineering
Name: Hao Dong
Date Time: Thursday, April 4th, 2024 - 1:30 p.m.
Advisor: Dr. Kristen Cetin
In the United States, the residential and commercial sectors have consumed increasingly more energy over the past 70 years. As the U.S. shifts towards a carbon-neutral electric grid, electrification using fossil fuel-free, renewable energy resources such as wind and solar will help to reduce greenhouse gas (GHG) emissions. To reduce the need for fossil fuels and utilize energy more efficiently, technologies and policies are being introduced to help decrease the demand-side energy intensity of the building sectors. Three issues are addressed in this research to support the goals of smart buildings and net-zero energy buildings (NZEB) in achieving human comfort and demand-side management (DSM): sensing technology sensitivity for smart building controls, occupants' patterns and correlations in residential buildings, and appliance use in residential buildings.
First, there has been a lack of studies and guidance on the appropriate placement of various sensors within a building and how this placement impacts building control performance. This research thus first investigates (i) how sensitive building controls are to sensor placement, in particular, sensor location and orientation. A sensor placement impact analysis evaluates the effects on energy use and demand for an integrated lighting and shading control system. Second, various studies have shown that occupancy-related factors in energy modeling can create significant differences in building energy consumption. Human-related factors, especially occupants' activities and behavior, are less well understood, especially in the wake of lifestyle changes that have occurred as a result of the pandemic. This research thus (ii) assesses and quantifies the changes to occupancy patterns, and their relationship to socioeconomic factors, that have occurred due to the COVID-19 pandemic. Finally, the third topic focuses on demand-side management (DSM), which enables control of the quantity and timing of electricity consumption. Approximately one-third of this consumption is from large appliances, many of which are occupancy-driven loads. Historically, energy use information for estimating the energy use of individual appliances has originated from a combination of field-collected and simulated data. However, this data originates from sources assessing pre-pandemic energy consumption patterns; thus, there is a need to (iii) assess how energy use patterns of appliances have changed during and post-pandemic. This research thus helps to estimate demand reduction opportunities from the use of appliances in DSM applications.
Department: Civil and Environmental Engineering
Name: Soham Vanage
Date Time: November 15, 2023 - 3:30 p.m.
Advisor: Kristen Cetin
IMPROVING ENERGY USE, DEMAND AND VISUAL COMFORT IN COMMERCIAL BUILDINGS USING LIGHTING AND SHADING CONTROLS
Windows provide occupants with natural light and a view of the outside, enhancing productivity, which is important as people spend approximately 90% of their time indoors. This is especially the case during and after the COVID-19 pandemic. Automated controls for window shading systems can be used to control solar radiation and daylight entering the space. Lighting controls can reduce lighting requirements, providing energy savings and better visual comfort for occupants than manual controls, which are seldom used effectively.
Past studies have explored automated lighting and shading control strategies and reported energy savings and visual comfort improvements over their baselines. However, the assumptions for baseline models differ across studies, making it difficult to compare these automated controls. Thus, this research uses a multi-step modeling process, including daylighting and energy simulations using RADIANCE and EnergyPlus, respectively, (i) to compare existing control strategies using the same building inputs (baseline model) for a prototypical small office building; (ii) to develop and evaluate the effectiveness of a novel integrated control strategy that uses variables such as occupancy, HVAC state, solar radiation entering the space, time of day, and other variables for control; and (iii) to develop a parametric model to investigate the impact of different input variables, such as building form factor, window-to-wall ratio for different orientations, shade properties such as openness factor, and shade overhang depth, on energy performance and visual comfort.
On top of improving energy efficiency and visual comfort in buildings, managing demand at the grid level is becoming more important as renewable energy is added to the generation mix. Instead of adding more generation to balance the grid, usually from new fossil fuel-based generation, the other approach is to use existing building loads and reduce their demand during specific hours (also known as demand-side Flexibility Services (FS)). As buildings become smarter with the adoption of new technologies for sensing and control, more integration between buildings and the electric grid is possible. Building loads such as air conditioning and lighting in commercial buildings have the potential to provide demand-side FS. In particular, demand-side flexibility using lighting loads is not well studied in the literature. In commercial buildings, lighting accounts for approximately 10-15% of the load at any time. Past studies have shown that lighting can be dimmed by 15-20% without causing visual discomfort to the occupants. The fourth objective of this study is thus (iv) to improve the existing literature by providing building-level and grid-level estimates of using lighting loads in all common commercial building types as demand-side FS for three future scenarios in the Midwest region.
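Combining the two ranges quoted above gives a quick order-of-magnitude estimate of the flexibility available from lighting; the building demand figure below is a placeholder, not a result of this study.

# Back-of-envelope sketch using the ranges quoted above (all inputs are placeholders).
building_demand_kw = 500.0            # commercial building demand during a flexibility event
lighting_share = 0.12                 # lighting is roughly 10-15% of load
dimming_depth = 0.18                  # roughly 15-20% dimming without visual discomfort
flexible_kw = building_demand_kw * lighting_share * dimming_depth
print(f"lighting flexibility: {flexible_kw:.0f} kW ({flexible_kw / building_demand_kw:.1%} of demand)")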
Department: Civil and Environmental Engineering
Name: Saeed Memari
Date Time: November 15, 2023 - 11:30 a.m.
Advisor: Phanikumar Mantha
COMBINING REMOTE SENSING, MACHINE-LEARNING AND MECHANISTIC MODELING TO IMPROVE COASTAL HYDRODYNAMIC AND WATER QUALITY MODELING IN THE LAURENTIAN GREAT LAKES
Large lakes often serve as early indicators of shifts in the environment. Observations within the Great Lakes ecosystems continue to highlight a deterioration in water quality, a surge in algal bloom occurrences, and growing threats to indigenous species. Given the inherent complex dynamics of these inland seas and the growing environmental pressures, it is important to understand shifts in the intricate process dynamics governing these systems. Hydrodynamics and temperature, in particular, are fundamental variables that play significant roles as they influence multiple physical, chemical, and biological processes taking place within the lakes and their coastal areas. The goal of this study is to improve coastal hydrodynamic and water quality models of the Laurentian Great Lakes. Extensive field datasets were collected in Lake Huron and Lake Erie over multiple years, focusing on diverse factors affecting coastal processes, including the roles of oscillating, bidirectional exchange flows between Lake Michigan and Lake Huron at the Straits of Mackinac, groundwater upwelling and submerged sinkholes in bays of Lake Huron, and contaminant plumes originating from rivers draining into the lakes. The performance of the models heavily depends on the quality of boundary forcing data and how the domain is discretized. Thus, a systematic assessment was done to improve the models by improving domain discretization through depth-adaptive triangular meshes and nested-grid methods. Additionally, detailed meteorological forcing fields were created with reanalysis and in-situ datasets. High-resolution time series data for water quality variables were generated using machine learning models, since traditional monitoring data are notorious for their low temporal resolution, especially for microbiological water quality. The accuracy and performance of the models were tested against in-situ observations. This encompassed data on currents, lake levels, water temperature, and water quality variables (turbidity and Escherichia coli concentrations). High-resolution remote sensing imagery was also incorporated for a comprehensive evaluation of spatial plume dynamics. Novel insights from this research include an understanding of the crucial role played by the exchange flows in the Straits of Mackinac on transport timescales and biophysical processes in the bays of Lake Huron. The exchange flows significantly influence regions as far as 50-70 km from the Straits, changing, among other things, bottom currents, which have important implications for biogeochemical processes, including the resuspension of bottom sediment, nutrient availability (e.g., nitrogen and phosphorus), and the growth and sloughing events of benthic algae such as Cladophora. Observed vertical velocities close to the lake bed in Thunder Bay, Lake Huron were found to be an order of magnitude higher compared to simulated vertical velocities of the same system using models that did not explicitly account for groundwater inflow from the karst lake bed. Models and data were used to estimate the upwelling groundwater flux and to quantify the impacts of ignoring groundwater in this system. The study highlights the significant benefits of merging best-available techniques and a fusion of mechanistic modeling, machine learning, and remote sensing to push the envelope of model performance in the context of the Great Lakes. This research is expected to aid management efforts aimed at enhancing and preserving the resilience of coastal regions.
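One way machine learning can fill temporal gaps in sparse microbiological monitoring is to regress grab-sample concentrations on continuously sensed covariates and then predict at the sensors' cadence; the sketch below uses a random forest with entirely synthetic data and should not be read as the models built in this study.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training set: sparse E. coli grab samples paired with continuously
# measured covariates (turbidity in NTU, water temperature in C, river discharge in m3/s).
rng = np.random.default_rng(7)
X_train = np.column_stack([rng.uniform(1, 80, 60),      # turbidity
                           rng.uniform(4, 26, 60),      # temperature
                           rng.uniform(20, 400, 60)])   # discharge
log_ecoli = 0.8 + 0.02 * X_train[:, 0] + 0.003 * X_train[:, 2] + 0.2 * rng.standard_normal(60)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, log_ecoli)

# Predict a high-resolution (e.g., hourly) E. coli series wherever the sensors report covariates:
X_hourly = np.column_stack([rng.uniform(1, 80, 5), rng.uniform(4, 26, 5), rng.uniform(20, 400, 5)])
print(np.round(10 ** model.predict(X_hourly), 1))   # back-transform from log10 CFU/100 mL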
Department: Civil and Environmental Engineering
Name: Omid Bagheri
Date Time: Friday, October 13, 2023 - 3:00 p.m.
Location: 1234 Engineering Building
Advisor: Dr. Yadu Pokhrel
This dissertation investigates the intricate dynamics of hydrologic systems in the Amazon River basin (ARB) in the face of evolving climate patterns and human interventions. The ARB – a pivotal element of the global climate, hydrological, and biogeochemical systems – holds immense biodiversity and profoundly influences the global water, energy, and carbon cycles. Climate variations and human activities, especially deforestation in the southern subbasins, have considerably altered the basin's functioning. Despite extensive research, critical gaps persist in understanding key hydrological processes and rainforest resilience. This research disentangles the impacts of climate and land use/land cover (LULC) changes toward devising robust resource management strategies. The dissertation employs state-of-the-art hydrological modeling, examining the pivotal role of shallow groundwater in modulating surface fluxes and potentially averting rainforest transformation. The results indicate that at least 34% of the Amazonian forest is supported by groundwater during the dry season. This study reveals a two-month lag between seasonal peak evapotranspiration (ET) and river discharge as a crucial mechanism in preventing the rainforest from tipping into savanna. The ARB is dominantly energy-limited; however, the results suggest that in the absence of groundwater support, and with less than ~125 mm/month of precipitation, the ARB could have become water-limited, at least in some regions. The long-term basin-averaged ET—dominated by transpiration—changed with a split pattern of ±9% in the past three decades. Similarly, water table depth (WTD) (±19%) and runoff (±29%) changed with heterogeneous patterns across the ARB. Moreover, by quantifying the impact of climate variability and LULC changes, this research finds that climate variability remains the dominant influence on WTD dynamics; however, the impacts on ET varied across the basin. Runoff patterns were intricately tied to precipitation and water table dynamics, demonstrating regional variations influenced by both climate variability and LULC changes. Through a comprehensive area fraction analysis, this research identifies tipping points associated with groundwater dynamics. This study provides crucial insights on (i) the dominant hydrological processes, (ii) the isolated impacts of climate variability and LULC change on the water cycle of the ARB, and (iii) tipping points in the ARB that are associated with groundwater dynamics. These findings could be used to inform effective water resource management and sustainable environmental practices in this ecologically significant region.
Department: Civil and Environmental Engineering and Mechanical Engineering
Name: Aref Ghaderi
Date Time: Tuesday, August 22, 2023 - 4:00 p.m.
Location: 3540 Engineering Building
Advisor: Dr. Roozbeh Dargazany
Nowadays, cross-linked elastomers play a significant role in several industries, such as aerospace, construction, transportation, marine, aeronautics, and automotive, due to their excellent flexibility, toughness, formability, and versatility. During their intended service life, these materials must sustain aggressive environmental damage induced by water infusion, temperature, and solar ultraviolet (UV) radiation, which affects their durability and properties.
A reliable design of rubber components that prevents early failure by environmental degradation requires digital simulations by means of high-fidelity thermo-mechanical constitutive models that can simulate the adverse effects of aging on the mechanical, electrical, thermal, and failure properties of polymers. So far, most aging models have been developed by coupling hyperelastic constitutive models with single-kinetic degradation models to describe the decay of materials during aging. However, a more detailed modeling approach can be achieved through modular continuum-based damage models that integrate finite strain theory and thermo-mechanical degradation models.
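One simple way such a coupling can look, purely for illustration, is a first-order Arrhenius decay of the network modulus feeding a neo-Hookean stress response; the kinetic constants and modulus below are placeholders, not parameters identified in this work.

import math

R = 8.314  # universal gas constant, J/(mol K)

def aged_shear_modulus(mu0, t_days, temp_K, A=2e11, Ea=90e3):
    """First-order Arrhenius decay of the effective network (shear) modulus,
    i.e., the 'single-kinetic degradation' idea, with placeholder kinetic constants."""
    k = A * math.exp(-Ea / (R * temp_K))          # rate constant, 1/day
    return mu0 * math.exp(-k * t_days)

def neo_hookean_uniaxial_stress(mu, stretch):
    """Nominal uniaxial stress of an incompressible neo-Hookean solid."""
    return mu * (stretch - stretch ** -2)

mu0 = 0.6e6  # Pa, unaged shear modulus (placeholder)
for days in (0, 30, 90):
    mu = aged_shear_modulus(mu0, days, temp_K=353.15)   # accelerated aging at 80 C
    print(f"after {days:3d} days at 80 C: stress at stretch 2.0 = "
          f"{neo_hookean_uniaxial_stress(mu, 2.0) / 1e6:.2f} MPa")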
Rubber elasticity theory draws on (i) statistical mechanics at the micro-scale, (ii) phenomenological modeling of the network at the meso-scale, and (iii) continuum mechanics at the macro-scale to model the material. Accordingly, hyperelastic models fall into three main categories: the phenomenological approach, the micro-mechanical approach, and the data-driven approach.
Recently, the emergence of machine learning (ML) models has attracted much attention. The first generation of "black-box" ML models, another type of phenomenological model, was proposed to model the mechanical behavior of rubbery media.
In solid mechanics, stress–strain tensors are only partially observable in lower dimensions, so obtaining data to feed a black-box ML model is exceptionally challenging. As a result, these approaches quickly become obsolete due to the large amount of training data they demand and the lack of constraints on their output margins.
This issue can be resolved by a new generation of ML models inspired by physics-informed neural networks (PINNs), which infuse physics-based knowledge into black-box models. Here, we modify PINN models to develop hybrid frameworks that can address the limitations of both phenomenological and micro-mechanical models by obtaining micro-structural behavior from macroscopic experimental data sets.
The objective of this defense is to provide a new approach for reduced-order, physics-based, data-driven modeling of multi-stressor damage in elastomers by infusing knowledge into a neural network. The following are the major thrusts of our research in the proposed dissertation:
(i) To design a systematic approach to reduce the order of the constitutive mapping and address the data volume problem for training.
(ii) To incorporate background knowledge from polymer physics, continuum mechanics, and thermodynamics into the neural networks and constrain the solution space.
(iii) To develop a neural network for predicting various inelastic effects that is far less data-dependent and more interpretable than current PINNs and uses a knowledge-confined solution space.
(iv) To validate our proposed hybrid framework based on limited data to describe the relationship between elastomeric network mechanics and environmental degradation.
In further detail, the model has been successfully developed and validated in five different damage scenarios, which describe the evolutionary process of developing the final platform. These steps are as follows: (I) providing a model for polymers in non-extreme environments to capture the dependence of elastomer behavior on loading conditions such as strain rate and temperature, as well as compound morphology factors such as filler percentage and crosslink density; (II) developing a model for single-mechanism aging, i.e., thermal aging or hydrolytic aging; (III) developing a model to capture the accumulated damage of fatigue and thermo-aging; (IV) introducing physics-informed neural networks (PINNs) to simulate the multiple stiff and semi-stiff ODEs that govern pyrolysis and ablation; and (V) developing a Bayesian surrogate constitutive model to estimate the failure probability of elastomers.
The models used in the proposed platform are the first hybrid models developed and validated for polymer components and thus bring great novelty and value to the industry. The model proposed in this work can significantly improve the design process of polymeric components by predicting the reliability, durability, and performance loss of materials based on the projected mechanical and environmental loading conditions. Such knowledge can significantly reduce design costs, the number of reliability tests needed, and maintenance and overhaul costs, and, most importantly, prevent unexpected catastrophic failures.
Department: Civil and Environmental Engineering
Name: Mahdi Ghazavi
Date Time: Wednesday, May 3, 2023 - 10:00am
Location: 3540 Engineering Building
Advisor: Dr. Muhammed Emin Kutay
Long-life pavements are designed and built to last for over 50 years without needing major structural rehabilitation or reconstruction. Reported benefits of such pavements include low life-cycle cost, less frequent repair and/or rehabilitation, lower user-delay costs, and lower environmental impact. Several approaches exist to design long-life pavements, all of which are based on mechanistic-empirical principles. While designing long-life pavements, deep structural distresses (e.g., bottom-up cracking) are designed to never develop, by limiting the maximum critical stresses and strains. Only surficial distresses (e.g., top-down cracking, rutting, etc.) are allowed to occur, but they are managed via periodic maintenance (e.g., mill and overlay). Several states in the US have built long-life pavements by enhancing structural design methods, using better materials, and improving specifications and construction practices. In Michigan, four pilot long-life pavement sections were constructed between 2017 and 2019: two rigid and two flexible pavements. Each pilot project included a long-life section and an accompanying standard (control) section constructed on the same highway. Modifications to standard designs and materials were made to extend their service life. The focus of this dissertation is on the two flexible projects. The scope of the study included as-built evaluation of these pilot long-life projects to determine their potential for meeting the intended design and service lives. The Michigan Department of Transportation (MDOT) performed numerous field tests and collected material samples from these projects. Extensive analysis of the field data and numerous laboratory tests were conducted to characterize the material properties. As-constructed material properties were used in different mechanistic-empirical (ME) design software to estimate the expected performance of all the pilot projects. Based on the detailed laboratory and field testing and the mechanistic-empirical performance predictions, recommendations were made on structural design, material selection, construction, and quality control and quality assurance procedures. The main objectives of this study are to perform a thorough analysis of the pilot flexible long-life projects, which were designed based on state-of-the-practice methods, to enhance the mechanistic-empirical design of these pavements, and to propose an alternative design approach for long-life pavements to potentially reduce life-cycle cost and improve their performance.
Department: Civil and Environmental Engineering
Name: Mumtahin Hasnat
Date Time: Monday, April 24, 2023 - 12:00pm
Location: 3540 Engineering Building
Advisor: Dr. Muhammed Emin Kutay
The Michigan Department of Transportation (MDOT) has been using the Distress Index (DI) since the inception of its pavement management system (PMS) in the early 1990s. DI was developed to help MDOT engineers in their decision-making process, budget allocation, and prioritization for future maintenance or reconstruction activities. However, the raw data requirements for the DI are complicated (and somewhat unique compared to the rest of the nation), and MDOT has been having difficulty finding vendors to collect PMS data. Over the last three decades, the pavement industry has seen many advances in data collection, distress identification, performance modeling, and other processes fundamental to PMSs. Consequently, there is a need to revisit the DI used by MDOT and revise it according to modern pavement data collection standards and calculation methodology. The objective of this study was to develop an enhanced pavement condition score and associated PMS data collection methodology for use by MDOT. To meet this objective, 2081 flexible and 741 rigid pavement sections were selected from MDOT's performance database. Then, five different condition indices used by other state agencies were computed using MDOT's PMS data and compared against MDOT's DI. The results were presented through statistical analysis and scatter plots. Maintenance records were used to compare the magnitudes of different indices right before maintenance activities were performed. The new pavement condition parameter was selected to follow the current state of the practice in its rating scale and to consider major distresses. The newly developed condition parameter is backward compatible with MDOT's historical pavement management data. Moreover, while developing the new pavement condition index, important criteria such as policy sensitivity, ease of understanding, and usefulness in decision-making were considered. Furthermore, various performance models were used to predict the new condition index and International Roughness Index (IRI) data, and pavement fix lives were estimated for both asphalt and rigid pavements.
Department: Computational Mathematics, Science and Engineering
Name: Joey Bonitati
Date Time: Friday, August 16th, 2024 - 12:00 p.m.
Advisor: Dean Lee
This thesis investigates quantum algorithms for eigenstate preparation, with a primary focus on solving eigenvalue problems such as the Schrödinger equation by utilizing near-term quantum computing devices. These problems are ubiquitous in several scientific fields, but more accurate solutions are specifically needed as a prerequisite for many quantum simulation tasks. To address this, we establish three methods in detail: quantum adiabatic evolution with optimal control, the Rodeo Algorithm, and the Variational Rodeo Algorithm.
The first method explored is adiabatic evolution, a technique that prepares quantum states by simulating a quantum system that evolves slowly over time. The adiabatic theorem can be used to ensure that the system remains in an eigenstate throughout the process, but its implementation can often be infeasible on current quantum computing hardware. We employ a unique approach using optimal control to create custom gate operations for superconducting qubits and demonstrate the algorithm on a two-qubit IBM cloud quantum computing device.
We then explore an alternative to adiabatic evolution, the Rodeo Algorithm, which offers a different approach to eigenstate preparation by using a controlled quantum evolution that selectively filters out undesired components in the wave function stored on a quantum register. We show results suggesting that this method can be effective in preparing eigenstates, but its practicality is predicated on the preparation of an initial state that has significant overlap with the desired eigenstate. To address this, we introduce the novel Variational Rodeo Algorithm, which replaces the initialization step with dynamic optimization of quantum circuit parameters to increase the success probability of the Rodeo Algorithm. The added flexibility compensates for instances in which the original algorithm can be unsuccessful, allowing for better scalability.
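The filtering step of the Rodeo Algorithm can be illustrated with a small classical simulation. Assuming a diagonal toy Hamiltonian and Gaussian-distributed evolution times, each cycle multiplies the weight of an eigencomponent with energy E by cos^2((E - E_target) t / 2), so components far from the target energy are rapidly suppressed; all values below are placeholders, not results from the thesis.

import numpy as np

rng = np.random.default_rng(1)

# Toy diagonal Hamiltonian: eigenvalues of a 10-level system.
energies = np.linspace(-2.0, 2.0, 10)

# Initial state with uniform overlap on all eigenstates.
amplitudes = np.full(len(energies), 1.0 / np.sqrt(len(energies)))

e_target = energies[3]     # the eigenstate we want to project onto
n_cycles = 10
sigma = 3.0                # width of the Gaussian from which evolution times are drawn

weights = amplitudes ** 2
for _ in range(n_cycles):
    t = rng.normal(0.0, sigma)
    # Probability that the ancilla is measured in |0> for each eigencomponent in this cycle.
    weights = weights * np.cos(0.5 * (energies - e_target) * t) ** 2

weights /= weights.sum()   # renormalize the post-selected state
print("overlap with target eigenstate after filtering:", weights[3])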
This research seeks to contribute to a deeper understanding of how quantum algorithms can be employed to attain efficient and accurate solutions to eigenvalue problems. The overarching goal is to present ideas that can be used to improve understanding of nuclear physics by providing potential quantum and classical techniques that can aid in tasks such as the theoretical description of nuclear structures and the simulation of nuclear reactions.
Department: Computational Mathematics, Science, and Engineering
Name: Tianyu Yang
Date Time: Friday, April 12th, 2024 - 1:00 p.m.
Advisor: Yang Yang
Ultrasound modulated bioluminescence tomography (UMBLT) is a technique for imaging the 3D distribution of biological objects such as tumors by using a bioluminescent source as a biomedical indicator. It uses bioluminescence tomography (BLT) with a series of perturbations caused by acoustic vibrations. UMBLT outperforms BLT in terms of spatial resolution. The current UMBLT algorithm in the transport regime requires measurement at every boundary point in all directions, and reconstruction is computationally expensive. In this talk, we will first introduce the UMBLT model in both the diffusive and transport regimes, and then formulate the image reconstruction problem as an inverse source problem using internal data. Second, we present an improved UMBLT algorithm for isotropic sources in the transport regime. Third, we generalize an existing UMBLT algorithm in the diffusive regime to the partial data case and quantify the error caused by uncertainties in the prescribed optical coefficients.
Department: Computational Mathematics, Science and Engineering
Name: He Lyu
Date Time: Friday, May 19, 2023 - 10:00am
Location: Zoom
Advisor: Dr. Rongrong Wang
In the fields of statistical and machine learning, one frequently encounters the task of analyzing high-dimensional data. Since high dimensionality poses a great challenge to traditional methods, new methods that are specifically designed for high-dimensional data have been developed. A promising approach to tackle the curse of dimensionality is to make prior assumptions on the data. In this dissertation, we focus on the low-intrinsic-dimensionality prior of the data, which assumes that the high-dimensional data lies around a low-dimensional manifold. In the special case when the manifold is a linear subspace, this prior reduces to the standard low-rank prior. The low-rank assumption underlies many popular statistical and machine learning algorithms, such as Principal Component Analysis and Singular Value Hard Thresholding.
The defense presentation will consist of two parts. The first part will explore the robustness of reconstruction under the low-rank prior for various applications. In particular, we analyze the fundamental perturbation problem of Singular Value Decomposition (SVD). Due to the significant importance of SVD in data science and its sensitivity to noise, studying its stability is crucial for the reliability of many machine learning algorithms that involve SVD. We establish a useful set of formulae for the sinΘ distance between the original and the perturbed singular subspaces. Following this, we further derive a collection of new results on SVD perturbation-related problems.
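For readers unfamiliar with the quantity being perturbed, the following is a small numerical illustration (not the thesis's formulae) of the sinΘ distance between a rank-k singular subspace and its perturbed counterpart, computed from the principal angles; the matrix sizes, rank, and noise level are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

A = rng.standard_normal((60, 40))
E = 1e-2 * rng.standard_normal((60, 40))   # small additive perturbation

k = 5
U, _, _ = np.linalg.svd(A, full_matrices=False)
U_tilde, _, _ = np.linalg.svd(A + E, full_matrices=False)
Uk, Uk_tilde = U[:, :k], U_tilde[:, :k]

# Principal angles: the singular values of Uk^T Uk_tilde are the cosines of the angles.
cosines = np.linalg.svd(Uk.T @ Uk_tilde, compute_uv=False)
sines = np.sqrt(np.clip(1.0 - cosines**2, 0.0, None))

print("spectral sin-Theta distance :", sines.max())
print("Frobenius sin-Theta distance:", np.linalg.norm(sines))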
In the second part, we employ the low-rank prior for manifold denoising problems. Specifically, we generalize the Robust PCA (RPCA) method to the manifold setting and propose an optimization framework that separates the sparse component from the manifold under noisy data. It is worth noting that in this work, we generalize the low-rank prior to a more general form to accommodate data with a more complex structure: instead of assuming the data itself lies in a low-dimensional subspace as in RPCA, we assume the clean data is distributed around a low-dimensional manifold. Therefore, if we consider a local neighborhood, the corresponding sub-matrix will be approximately low rank. Theoretical error bounds are provided when the tangent spaces of the manifold satisfy certain incoherence conditions, and the efficacy of our method is demonstrated on both synthetic and real datasets.
Department: Computer Science and Engineering
Name: Nicholas Polanco
Date Time: Thursday, December 5th, 2024 - 11:00 a.m.
Advisor: Betty H.C. Cheng
The increase of inward-facing and outward-facing communication used by modern vehicles with automated features expands the breadth and depth of automotive cybersecurity vulnerabilities. Furthermore, because of the prominent role that human behavior plays in the lifetime of a vehicle, social and human-based factors must be considered in tandem with the technical factors when addressing cybersecurity. A focus on informing and enabling stakeholders and their corresponding actions will promote security of the vehicle through a human-focused approach. The diverse stakeholders and their interactions with a modern-day vehicle cover a spectrum of vulnerabilities that need to be secured. Example stakeholders include the consumer using the vehicle, the technicians working on the car, and the engineers designing the software. Stakeholder-aware strategies can be applied in both a social and technical manner to increase preventative security measures for autonomous vehicles. By leveraging theoretical foundations from the criminology domain, we create reusable social and technical stakeholder-based solutions applicable to the vehicle and its supporting infrastructures that can be used by the different stakeholders interacting with the vehicle. In this dissertation, we take an interdisciplinary approach to address automotive cybersecurity, in which we synergistically combine cybercrime theory, human factors, and technical solutions to develop reusable prevention and detection techniques.
Department: Computer Science and Engineering
Name: Vishal Asnani
Date Time: Tuesday, November 26th, 2024 - 8:30 a.m.
Advisor: Dr. Xiaoming Liu
Adversarial attacks in computer vision typically exploit vulnerabilities in deep learning models, generating deceptive inputs that can lead AI systems to incorrect decisions. However, proactive schemes, approaches designed to embed purposeful signals into visual data, can serve as “adversarial attacks for social good,” harnessing similar principles to enhance the robustness, security, and interpretability of AI systems. This research explores the application of proactive schemes in computer vision, diverging from conventional passive methods by embedding auxiliary signals known as "templates" into input data, fundamentally improving model performance, attribution capabilities, and detection accuracy across diverse tasks. This includes novel techniques for image manipulation detection and localization, which introduce learned templates to accurately identify and pinpoint alterations made by multiple, previously unseen Generative Models (GMs). The Manipulation Localization Proactive scheme (MaLP), for example, not only detects but also localizes specific pixel changes caused by manipulations, showing resilient performance across a broad range of GMs. Extending this approach, the Proactive Object Detection (PrObeD) scheme utilizes encoder-decoder architectures to embed task-specific templates within images, enhancing the efficacy of object detectors, even under challenging conditions like camouflaged environments.
This research further expands proactive schemes into generative models and video analysis, enabling attribution and action detection solutions. ProMark, for instance, introduces a novel attribution framework by embedding imperceptible watermarks within training data, allowing generated images to be traced back to specific training concepts, such as objects, motifs, or styles, while preserving image quality. Building on ProMark, CustomMark offers selective and efficient concept attribution, allowing artists to opt into watermarking specific styles and easily add new styles over time, without the need to retrain the entire model. Inspired by the proactive structure of PrObeD for 2D object detection, PiVoT introduces a video-based proactive wrapper that enhances action recognition and spatio-temporal action detection. By integrating action-specific templates through a template-enhanced Low-Rank Adaptation (LoRA) framework, PiVoT seamlessly augments various action detectors, preserving computational efficiency while significantly boosting detection performance. Lastly, the thesis presents a model parsing framework that estimates "fingerprints" for generative models, extracting unique characteristics from generated images to predict the architecture and loss functions of the underlying networks, a particularly valuable tool for deepfake detection and model attribution. Collectively, these proactive schemes offer significant advancements over passive methods, establishing robust, accurate, and generalizable solutions for diverse computer vision challenges. By addressing key limitations of conventional passive approaches across different vision applications, this research lays the groundwork for a future where proactive frameworks can improve AI-driven applications.
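The template-enhanced LoRA mentioned above builds on the standard low-rank adaptation update, in which a frozen weight matrix W is augmented by a trainable low-rank product scaled by alpha/r. The sketch below is a generic LoRA linear layer, not PiVoT's implementation; the rank, scaling, and initialization choices are common defaults assumed for illustration.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (generic LoRA)."""
    def __init__(self, in_features, out_features, rank=4, alpha=8.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # pretrained weights stay frozen
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))  # zero init: update starts at zero
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(256, 256)
x = torch.randn(2, 256)
print(layer(x).shape)  # torch.Size([2, 256])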
Department: Computer Science and Engineering
Name: Shivangi Yadav
Date Time: Friday, November 8th, 2024 - 10:30 a.m.
Advisor: Dr. Arun Ross
Synthetic biometric data – such as fingerprints, face, iris and speech – can overcome some of the limitations associated with the use of real data in biometric systems. The focus of this work is on the iris biometric. Current methods for generating synthetic irides and ocular images have limitations in terms of quality, realism, intra-class diversity and uniqueness. Different methods are proposed in this thesis to overcome these issues while evaluating the utility of synthetic data for two biometric tasks: iris matching and presentation attack (PA) detection.
Two types of synthetic iris images are generated: (1) partially synthetic and (2) fully synthetic. The goal of “partial synthesis” is to introduce controlled variations in real data. This can be particularly useful in scenarios where real data are limited, imbalanced, or lack specific variations. We present three different techniques to generate partially synthetic iris data: one that leverages the classical Relativistic Average Standard Generative Adversarial Network (RaSGAN), a novel Cyclic Image Translation Generative Adversarial Network (CIT-GAN), and a novel Multi-domain Image Translative Diffusion StyleGAN (MID-StyleGAN). While RaSGAN can generate realistic-looking iris images, this method is not scalable to multiple domains (such as generating different types of PAs). To overcome this limitation, we propose CIT-GAN, which generates iris images using multi-domain style transfer. To further address the issue of quality imbalance across different domains, we develop MID-StyleGAN, which exploits the stable and superior generative power of diffusion-based StyleGAN. The goal of “full synthesis” is to generate iris images with both inter- and intra-class variations. In this regard, we propose two novel architectures, viz., iWarpGAN and IT-diffGAN. The proposed iWarpGAN focuses on generating iris images that are different from the identities in the training data using two transformation pathways: (1) Identity Transformation and (2) Style Transformation. On the other hand, IT-diffGAN projects input images onto the latent space of a diffusion GAN, identifying and manipulating the features most relevant to identity and style. By adjusting these features in the latent space, IT-diffGAN generates new identities while preserving image realism.
A number of experiments are conducted using multiple iris and ocular datasets in order to evaluate the quality, realism, uniqueness, and utility of the synthetic images generated using the aforementioned techniques. An extensive analysis conveys the benefits and the limitations of each technique. In summary, this thesis advances the state of the art in iris and ocular synthesis by leveraging the prowess of GANs and Diffusion Models.
Department: Computer Science and Engineering
Name: Ira Woodring
Date Time: Tuesday, October 29th, 2024 - 12:00 p.m.
Advisor: Dr. Charles Owen
Unified Modeling Language (UML) Class Diagramming is the commonly accepted mechanism used to describe relationships between software components. In addition, it is an essential educational tool used to convey the structure of software and the patterns of software design to students. Unfortunately, UML is a visual-only mechanism and therefore is not useful for developers and students who are blind or have visual impairments. This work describes a method for conveying class diagrams using audio, which addresses the lack of a tool to support these populations. The method works by rigidly dividing the views of a diagram into smaller spaces; elements in these subspaces are conveyed through manipulation of audio properties. Multiple user studies were performed to demonstrate that the tool is viable for conveying the static structure of software elements and that the workload required to use the tool is not too high. The results of the studies indicate that the tool is effective and requires only a slightly higher workload than traditional class diagrams.
Department: Computer Science and Engineering
Name: Aryan Tanmay Gupta
Date Time: Friday, October 11th, 2024 - 1:00 p.m.
Advisor: Dr. Sandeep Kulkarni
We currently see a steady rise in the usage and size of multiprocessor systems, and so the community is ever more interested in developing fast parallel processing algorithms. However, most algorithms require a synchronization mechanism, which is costly in terms of computational resources and time.
If an algorithm can be executed in asynchrony, then it can use all the available computation power, and the nodes can execute without being scheduled or locked. However, to show that an algorithm guarantees convergence in asynchrony, we need to generate the entire global state transition graph and check for the absence of cycles. This takes time exponential in the size of the global state space.
In this dissertation, we present a theory that explains the necessary and sufficient properties of a multiprocessor algorithm that guarantees convergence even without synchronization. We develop algorithms for various problems that do not require synchronization. Additionally, we show for several existing algorithms that they can be executed without any synchronization mechanism.
A significant theoretical benefit of our work is in proving that an algorithm can converge even in asynchrony. Our theory implies that we can make such conclusions about an algorithm, by only showing that the local state transition graph of a computing node forms a partial order, rather than generating the entire global state space and determining the absence of cycles in it. Thus, the complexity of rendering such proofs, formal or social, is phenomenally reduced.
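As a minimal illustration of the kind of local check this theory enables (illustrative only, not the dissertation's algorithms), the snippet below verifies that a node's local state transition graph is acyclic, so that reachability between its local states forms a partial order, without ever constructing the global state space; the example transitions are hypothetical.

from collections import defaultdict

def is_partial_order(transitions):
    """Return True if the local state transition graph has no cycles,
    i.e., reachability between local states forms a partial order."""
    graph = defaultdict(list)
    indegree = defaultdict(int)
    states = set()
    for src, dst in transitions:
        graph[src].append(dst)
        indegree[dst] += 1
        states.update((src, dst))

    # Kahn's algorithm: if every state can be topologically ordered, there is no cycle.
    queue = [s for s in states if indegree[s] == 0]
    visited = 0
    while queue:
        s = queue.pop()
        visited += 1
        for nxt in graph[s]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    return visited == len(states)

# Local transition graph of one hypothetical node: acyclic, hence convergence-friendly.
print(is_partial_order([("s0", "s1"), ("s1", "s2"), ("s0", "s2")]))  # True
print(is_partial_order([("s0", "s1"), ("s1", "s0")]))                # False (contains a cycle)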
Experiments show a significant reduction in the time taken to converge when we compare the execution time of algorithms in the literature versus the algorithms that we design. We get similar results when we run an algorithm that guarantees convergence in asynchrony under a scheduler versus in asynchrony. These results highlight some important practical benefits of our work.
Department: Computer Science and Engineering
Name: Hongzhi Wen
Date Time: Tuesday, August 6th, 2024 - 9:30 a.m.
Advisor: Dr. Jiliang Tang
The rapid advancement of single-cell technologies allows for simultaneous measurement of multiple molecular features within individual cells, providing unprecedented multimodal data through single-cell multi-omics and spatial omics technologies. This thesis addresses the complex challenges of modeling these multimodal interactions using deep learning techniques. We propose two series of studies: the first, comprising scMoGNN and scMoFormer, explores the application of graph transformers to model relations between multimodal features, incorporating external domain knowledge; the second, SpaFormer, proposes a transformer-based framework for spatial transcriptomic data to extract cell context information. Despite the effectiveness of these models, their knowledge transferability across tasks and datasets remains limited. To overcome this, we introduce a new transformer-based foundation model, CellPLM, that encodes inter-cellular relations and multimodal features, demonstrating significant potential for future research in single-cell biology.
Department: Computer Science and Engineering
Name: Shengjie Zhu
Date Time: Wednesday, July 31st, 2024 - 10:00 a.m.
Advisor: Dr. Xiaoming Liu
Recovering structure and motion from videos is a well-studied, comprehensive 3D vision task that involves (1) image calibration, (2) two-view pose initialization, and (3) multi-view Structure-from-Motion (SfM). Prior arts are optimization-based methods built over sparse image correspondence inputs. This thesis develops systematic approaches to enhance classic solutions with deep learning models. We introduce EdgeDepth and PMatch for dense monocular depthmap and dense binocular correspondence map estimation. Since classic approaches typically rely on sparse and accurate inputs, they are less suitable for the dense yet high-variance predictions from dense depth and correspondence models. As a solution, we propose to optimize through the robust inlier-counting-based scoring function, which is widely applied in RANdom SAmple Consensus (RANSAC). (1) For image calibration, we introduce WildCamera. The system applies a RANSAC algorithm to a dense incidence field regressed by a deep model and calibrates in-the-wild monocular images without a checkerboard. (2) For two-view pose estimation, we introduce LightedDepth. It estimates the optimal pose by aligning the depth map with the correspondence map, maximizing the projective inliers. (3) The strategy is extended to a Hough Transform in RSfM for multi-view SfM over a local 3- to 9-frame system. Finally, we generalize the RSfM Hough Transform to a cumulative distribution function loss for the large-scale SfM task. To this end, we formulate a comprehensive system that recovers structure and motion from two-view, local multi-view, and large-scale multi-view images with dense monocular depthmaps and binocular correspondence maps. Compared to prior arts, our methods show improved accuracy on two-view and local multi-view systems and on-par accuracy on large-scale multi-view systems.
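The inlier-counting score referred to above can be illustrated with a generic RANSAC loop, here fitting a 2D line to noisy points; this is a schematic of the scoring idea only, not the calibration, pose, or SfM pipelines themselves, and all data and thresholds are synthetic placeholders.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: points near the line y = 0.5 x + 1, plus gross outliers.
x = rng.uniform(-10, 10, 200)
y = 0.5 * x + 1.0 + rng.normal(0, 0.1, 200)
y[:40] += rng.uniform(-20, 20, 40)   # 20% outliers
points = np.stack([x, y], axis=1)

def ransac_line(points, n_iters=500, inlier_thresh=0.3):
    best_model, best_inliers = None, -1
    for _ in range(n_iters):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        if np.isclose(p2[0], p1[0]):
            continue
        slope = (p2[1] - p1[1]) / (p2[0] - p1[0])
        intercept = p1[1] - slope * p1[0]
        # Inlier-counting score: how many points the hypothesis explains within the threshold.
        residuals = np.abs(points[:, 1] - (slope * points[:, 0] + intercept))
        inliers = int((residuals < inlier_thresh).sum())
        if inliers > best_inliers:
            best_model, best_inliers = (slope, intercept), inliers
    return best_model, best_inliers

model, count = ransac_line(points)
print(f"estimated line y = {model[0]:.2f} x + {model[1]:.2f} with {count} inliers")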
Department: Computer Science and Engineering
Name: Salman Ali
Date Time: Thursday, July 25th, 2024 - 2:00 p.m.
Advisor: Dr. Wolfgang Banzhaf
Complex supply chains such as the 'food supply chain' network involve diverse subsystems like stock management, feed harvesting, cold-storage transportation, and retail businesses. Throughout the food supply chain, major subsystems are owned by private organizations, which inhibits sharing of potentially useful common information. This results in a lack of trust and traceability and a lost opportunity to share knowledge and optimize the chain for better economic and environmental outcomes.
Bringing together dispersed and disjoint supply chain participants to collaborate on common applications beyond the 'point-of-sale' communication channel comes with numerous technological and data restriction challenges, which necessitates a generic, scalable, and user-controlled collaboration framework.
This thesis takes on the challenge of learning common knowledge in disjoint and dispersed supply chains by proposing a decentralized and distributed supply chain connectivity and collaboration framework controlled and run by chain participants.
Using an example of the 'Beef Supply Chain', several useful applications including carbon emissions tracking, supply chain optimization and collaborative machine learning applications using secure data pipelines are presented. Through practical applications and system evaluation, the efficacy of the proposed framework is demonstrated for collaboration, policy sharing, traceability, federated machine learning, knowledge transfer and increased value for supply chain participants.
Department: Computer Science and Engineering
Name: Wentao Bao
Date Time: Thursday, July 18th, 2024 - 2:00 p.m.
Advisor: Dr. Yu Kong
Though we have witnessed waves of success in visual intelligence, teaching machines to understand visual content at the level of human intelligence remains a fundamental challenge. In past decades, visual understanding has been extensively explored through computer vision tasks such as object (or activity) recognition, segmentation, and detection. However, existing methods can hardly be deployed in real open-world applications, where unseen environments, objects, and activities inevitably appear in testing. Such a limitation is attributed to the closed-world assumption that ignores the unknown in model design, learning, and evaluation.
In this dissertation defense, I will introduce my works that go beyond the traditional closed-world visual understanding and tackle several challenging open-world problems. The goal is to endow machines with visual perception capabilities in an open world, where unseen environments, image objects, and video activities will be tackled. First, I will investigate open-world visual forecasting problems in an unseen perception environment, such as autonomous driving and virtual reality. Specifically, we are interested in how the early observed videos can be leveraged to promptly forecast the traffic accident risk for safe self-driving, and predict the 3D hand motion trajectory in an unseen first-person view. Second, I will cover the open-world visual recognition problems that aim to identify the unseen visual concepts. In this part, I am interested in identifying and localizing unseen video activities such as human actions in general videos. Lastly, I will delve into open-world visual language understanding problems that further recognize unseen visual concepts from language queries. Specifically, we are interested in understanding unseen compositional objects in images and spatiotemporally detecting unseen human actions.
Department: Computer Science and Engineering
Name: Austin Ferguson
Date Time: Friday, June 28th, 2024 - 3:00 p.m.
Advisor: Dr. Charles Ofria
While evolution has created a stunning diversity of complex traits in nature, isolating the details for how a particular trait evolved remains challenging. Specifically, what were the critical events in evolutionary history that made the particular trait more or less likely to arise? We must consider historical contingency, where even small changes, such as an apparently neutral mutation, can have substantial influence on long-term evolutionary outcomes. Evolutionary biologists have long been interested in the role that historical contingency plays in evolution, but testing hypotheses of its effects has traditionally been difficult and time consuming, if it is even possible at all.
Here I leverage the speed and power of digital evolution to experimentally test the role of historical contingency in evolution. I start by observing how the evolution of phenotypic plasticity stabilizes future evolutionary dynamics. Next, I employ analytic replay experiments to empirically test which mutations in a population’s history increased the likelihood that associative learning evolves, first as case studies and then using more statistically powerful experimental approaches. I demonstrate that single mutations can drastically increase the odds of learning appearing, shifting it from a rare possibility to a near inevitability, and I find that these “potentiating” mutations exist in all studied lineages. Finally, I use potentiating mutations to develop an intuitive view into how adaptive momentum increases evolutionary exploration in populations experiencing disequilibrium.
We are only beginning to scratch the surface of how historical contingency influences evolution, but digital evolution systems can expedite this process by testing these hypotheses and further refining these techniques for use in natural organisms. This work, and those like it, are pivotal in understanding how populations previously evolved, how their accumulated history currently affects them, and how they might evolve far into the future.
Department: Computer Science and Engineering
Name: Oyendrila Dobe
Date Time: Friday, June 21st, 2024 - 11:00 a.m.
Advisor: Dr. Borzoo Bonakdarpour
Formal verification ensures the correctness of systems with respect to user-specified requirements. My research explores the different aspects involved in verification, by model checking, of systems described at an abstract level as Markov models, against hyperproperties expressed in HyperPCTL. We represent systems as Markov models due to their flexibility in modeling uncertainty (in terms of nondeterminism, randomization, and partial observability), and their simplicity in using the current state to determine the future evolution of the system. HyperPCTL allows the expression of probabilistic hyperproperties. In general, hyperproperties are system-level requirements that can express properties related to security, privacy, robustness, efficiency, etc. Prominent examples include noninterference of secret inputs on publicly observable outputs, observational determinism of public outputs, optimal path planning in robotics, individual fairness in models, side-channel timing attacks, and conformance of different system versions.
Given this combination of model and properties, we extend the previously proposed logic HyperPCTL to express specifications involving nondeterminism and rewards, study the complexity of the general model checking problem for this logic, and propose constraint-based algorithms for it, implemented in a tool called HyperProb. The high complexity of this problem has further motivated our research on the development of fragment-specific algorithms that scale better and of approximate statistical model-checking algorithms that extend the existing prominent tool PLASMA. We have further explored the parameter synthesis problem where, assuming that a HyperPCTL property holds in a model, we synthesize valid values for unknown parameters in our models. Overall, my talk describes our research efforts in advancing the state of the art in quantitative model checking of probabilistic hyperproperties in Markov models.
Department: Computer Science and Engineering
Name: Han Xu
Date Time: Monday, June 3rd, 2024 - 12:00 p.m.
Advisor: Jiliang Tang
When machine learning (ML) and artificial intelligence (AI) are applied in safety-critical tasks, such as autonomous vehicles or financial fraud detection, their reliability, especially under adversarial attacks, has become increasingly important. In order to enhance ML safety, it is essential to develop sound solutions for (1) identifying adversarial examples to uncover the weaknesses of models and (2) building robust models that can resist adversarial examples. In this talk, we will introduce some of our recent research findings in both directions. On the attack side, we will delve into our proposed attack algorithm that can achieve high efficiency and optimality, particularly in the discrete data domain, such as text data. On the defense side, we will address one important but frequently ignored weakness of adversarial training (one of the most popular strategies to improve model robustness), known as the “bias issue” of adversarial training. Motivated by these new findings and methodologies, we will also discuss potential future research directions as well as the social impacts of these research problems.
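As a generic, continuous-domain illustration of how adversarial examples are crafted (the attack discussed in this talk targets discrete text data and is not reproduced here), the snippet below applies the standard fast gradient sign method (FGSM) to a toy classifier; the untrained model, the input, and the epsilon value are placeholders, so the perturbation may or may not flip the prediction.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier and a single input; in practice this would be a trained model and real data.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x = torch.randn(1, 20, requires_grad=True)
y_true = torch.tensor([1])

loss = nn.functional.cross_entropy(model(x), y_true)
loss.backward()

# FGSM: take a small step in the direction that increases the loss the most.
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction      :", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())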
Department: Computer Science and Engineering
Name: Asadullah Hill Galib
Date Time: Thursday, May 30th, 2024 - 12:00 p.m.
Advisor: Pang-Ning Tan
The accurate modeling of extreme values in time series data is a critical yet challenging task that has garnered significant interest in recent years. The impact of extreme events on human and natural systems underscores the need for effective and reliable modeling methods. The proposed thesis aims to develop novel deep learning frameworks that can effectively model extreme events in time series data. The thesis introduces four novel deep learning frameworks: DeepExtrema, Self-Recover, SimEXT, and FIDE, which offer promising solutions for forecasting, imputation, representation learning, and generative modeling of extreme values in time series data. DeepExtrema focuses on integrating extreme value theory with deep learning formulation to improve the accuracy and reliability of extreme events forecasting. Self-Recover addresses data fusion challenges that arise from varying temporal coverage associated with long-term and random missing values of predictors. SimEXT explores how deep learning can be utilized to learn useful time series representations that effectively capture tail distributions for modeling extreme events. FIDE introduces a high-frequency inflation-based conditional diffusion model tailored towards preserving extreme value distributions within generative modeling. These frameworks are evaluated using real-world and synthetic datasets, demonstrating superior performance over existing state-of-the-art methods. The contributions of this research are significant in advancing the field of time series modeling and have practical implications across various domains, such as climate science, finance, and engineering.
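As background for how extreme value theory enters such frameworks, the following is a minimal classical sketch, independent of the four deep learning frameworks above: fitting a generalized extreme value (GEV) distribution to block maxima of a synthetic series with SciPy and reading off a return level; the data, block size, and return period are placeholders.

import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)

# Synthetic daily series; take yearly block maxima (365-day blocks over 50 "years").
series = rng.gumbel(loc=10.0, scale=2.0, size=365 * 50)
block_maxima = series.reshape(50, 365).max(axis=1)

# Fit the GEV distribution to the block maxima.
shape, loc, scale = genextreme.fit(block_maxima)
print(f"GEV fit: shape={shape:.3f}, loc={loc:.2f}, scale={scale:.2f}")

# 100-year return level: the value exceeded with probability 1/100 in any given block.
return_level_100 = genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
print(f"estimated 100-year return level: {return_level_100:.2f}")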
Department: Computer Science and Engineering
Name: Hanqing Guo
Date Time: Tuesday, May 14th, 2024 - 9:30 a.m.
Advisor: Dr. Li Xiao
Voice, as a primary way for people to communicate with each other and interact with computers/smart devices, is expected to be secure and private when people use it. However, recent studies have demonstrated the vulnerabilities of using voice to talk with people, conduct speaker authentication, and deliver messages to smart devices. For example, an eavesdropper can record the conversation; an adversary can play back the speaker's voice to attack the speaker authentication model; a hacker can craft fake speech to damage the reputation of the victim or launch impersonation scams; furthermore, an attacker can perform an adversarial voice attack to control the victim's smart devices. This talk aims to understand the root causes of these vulnerabilities, address the challenges of achieving private and secure voice communication, and explore future directions to fully resolve the security concerns of AI-enabled voice models and systems.
Department: Computer Science and Engineering
Name: Guangjing Wang
Date Time: Monday, May 13th, 2024 - 1:00 p.m.
Advisor: Dr. Qiben Yan
In the realm of the Internet of Things (IoT), users, devices, and environments communicate and interact with each other, creating a web of complex interactions. This interconnected web of interactions makes the IoT a powerful tool for enhancing human experiences. However, it simultaneously presents substantial challenges in ensuring security and privacy amid interactions among users, devices, and environments.
This dissertation investigates potential IoT interaction security and privacy issues by customizing data-centric AI algorithms. First, this dissertation studies complex interactions in smart homes where many interconnected smart devices are deployed. A graph learning-based threat detection system is designed to discover potential interactive threats across multiple smart home platforms. Second, considering smart home data privacy and data heterogeneity issues, a dynamic clustering-based federated graph learning framework is proposed to collaboratively train a threat detection model. Meanwhile, a Monte Carlo beam search-based method is designed to identify the interactive threat causes. Third, we explore the privacy issues behind the interactions between users and smartphones. Specifically, a potential bio-information leakage attack channel has been identified that utilizes near-ultrasound signals from a smartphone to recognize facial expressions based on a contrastive attention learning model. Fourth, we reveal two critical overprivileged issues in mobile activity sensing data generated from interactions between users and mobile devices: metadata-level and feature-level overprivileged issues. Correspondingly, we design the multi-grained data generation model to reconstruct mobile activity sensing data, so as to mitigate the privacy concerns behind the mobile sensing overprivileged issues.
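To make the collaborative training idea concrete, the following is a minimal, generic federated averaging (FedAvg-style) sketch; it is not the dissertation's clustering-based federated graph learning framework, and the linear model, synthetic client data, and round count are placeholders.

import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

def local_update(model, data, target, epochs=1, lr=0.01):
    """One client's local training on its private data."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(local(data), target).backward()
        opt.step()
    return local.state_dict()

def federated_average(states):
    """Server-side aggregation: element-wise mean of client model weights."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

global_model = nn.Linear(8, 1)
clients = [(torch.randn(32, 8), torch.randn(32, 1)) for _ in range(4)]  # private datasets

for round_ in range(5):
    client_states = [local_update(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(federated_average(client_states))

print("finished 5 federated rounds")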
We have implemented and extensively evaluated the proposed threat detection model, federated model training method, acoustic-based expression recognition model, and privacy-preserving data reconstruction model in practical settings. This dissertation concludes with a discussion of future work. We highlight the potential challenges and opportunities associated with the applied AI techniques for addressing security and privacy issues in the IoT. This dissertation points out the pathway for future research in enhancing security and privacy to safeguard the interactions among users, devices, AI, and environments.
Department: Computer Science and Engineering
Name: Hossein Rajaby Faghihi
Date Time: Tuesday, May 7th, 2024 - 11:00 a.m.
Advisor: Dr. Parisa Kordjamshidi
Reasoning over procedural text, which encompasses texts such as recipes, manuals, and 'how-to' tutorials, presents formidable challenges due to the dynamic nature of the world it describes. These challenges are embodied in tasks such as 1) tracking entities and their status changes (entity tracking) and 2) summarizing the process (procedural summarization).
This thesis aims to enhance the representation and reasoning over textual procedures by harnessing semantic structures in the input text and imposing constraints on the models' output. It delves into using semantic structures derived from the text, including relationships between actions and objects, semantic parsing of instructions, and the sequential structure of actions. Additionally, the thesis investigates the integration of structural and semantic constraints within neural models, resulting in coherent and consistent outputs that align with external knowledge. The thesis contributes significantly to three main areas: Entity tracking, Procedural Abstraction, and the Integration of constraints in deep learning.
In the entity tracking task, four primary contributions are made.
1) the development of a novel architecture that effectively encodes the flow of events within pretrained language models,
2) seamless transfer learning from diverse corpora through task reformulation,
3) the enhancement of language models by incorporating knowledge extracted from semantic parsers and leveraging ontological abstraction of actions, and
4) the creation of a new evaluation scheme considering fine-grained semantics in tracking entities.
Regarding procedural summarization, the thesis proposes a model for an explicit latent space for the procedure that is indirectly supervised to ensure the summary's action order corresponds to the order of events in the multi-modal instructions.
In the realm of integrating domain knowledge with deep neural networks, the thesis makes two significant contributions,
1) it contributes to the development of a generic framework that facilitates the incorporation of first-order logical constraints in neural models, and
2) it creates a new benchmark for evaluating constraint integration methods across five categories of tasks. This benchmark introduces novel evaluation criteria and offers valuable insights into the effectiveness of constraint integration methods across various tasks.
Department: Computer Science and Engineering
Name: Abdullah Alperen
Date Time: Friday, May 3rd, 2024 - 10:00 a.m.
Advisor: Dr. Hasan Metin Aktulga
Sparse matrix computations comprise the core component of a broad base of scientific applications in fields ranging from molecular dynamics and nuclear physics to data mining and signal processing. Among sparse matrix computations, the eigenvalue problem has a significant place due to its common use in the area of high performance scientific computing. In nuclear physics simulations, for example, one of the most challenging problems is solving large-scale eigenvalue problems arising from nuclear structure calculations. Numerous iterative algorithms have been developed to solve this problem over the years.
Lanczos and the locally optimal block preconditioned conjugate gradient (LOBPCG) method are two such popular iterative eigensolvers. Together, they present a good mix of the computational motifs encountered in sparse solvers. With this work, we describe our efforts to accelerate large-scale sparse eigensolvers through asynchronous runtime systems, the development of hybrid algorithms, and the utilization of GPU resources.
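For context, a basic (unoptimized) Lanczos iteration can be written compactly: it builds a small tridiagonal projection of a sparse symmetric matrix whose eigenvalues (Ritz values) approximate the extremal eigenvalues. The sketch below is illustrative only; the random test matrix, iteration count, and lack of reorthogonalization are simplifying assumptions, not the dissertation's implementations.

import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)

# Sparse symmetric test matrix (a stand-in for a large configuration-interaction Hamiltonian).
n = 500
A = sp.random(n, n, density=0.01, random_state=0)
A = (A + A.T) * 0.5

def lanczos(A, m=80):
    """Basic Lanczos: build an m-step tridiagonal approximation of the symmetric matrix A."""
    n = A.shape[0]
    Q = np.zeros((n, m + 1))
    alpha, beta = np.zeros(m), np.zeros(m)
    q = rng.standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(m):
        w = A @ Q[:, j]                      # the SpMV kernel discussed in this abstract
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:                  # invariant subspace found; stop early
            return alpha[: j + 1], beta[:j]
        Q[:, j + 1] = w / beta[j]
    return alpha, beta[:-1]

alpha, beta = lanczos(A)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
print("largest Ritz value      :", np.linalg.eigvalsh(T)[-1])
print("largest exact eigenvalue:", np.linalg.eigvalsh(A.toarray())[-1])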
We first evaluate three task-parallel programming models, OpenMP, HPX and Regent, for Lanczos and LOBPCG. We demonstrate these asynchronous frameworks’ merit on two architectures, Intel Broadwell (a multicore processor) and AMD EPYC (a modern manycore processor). We achieve up to an order of magnitude improvement both in execution time and cache performance.
We then examine and compare a few iterative methods for solving large-scale eigenvalue problems arising from nuclear structure calculations. In particular, besides Lanczos and LOBPCG, we discuss the possibility of using the block Lanczos method and the residual minimization method accelerated by direct inversion in the iterative subspace (RMM-DIIS). We show that RMM-DIIS can be effectively combined with either block Lanczos or LOBPCG to yield a hybrid eigensolver with several desirable properties.
We finally demonstrate the challenges posed by the emergence of accelerator-based computer architectures for achieving high performance in large-scale sparse computations. We particularly focus on the scalability of the sparse matrix-vector multiplication (SpMV) and sparse matrix multi-vector multiplication (SpMM) kernels of Lanczos and LOBPCG. We scale their performance up to hundreds of GPUs by improving their computation and communication aspects through hand-optimized CUDA kernels and hybrid communication methods.
Department: Computer Science and Engineering
Name: Steven Grosz
Date Time: Thursday, April 11th, 2024 - 1:30 p.m.
Advisor: Dr. Anil Jain
Fingerprint recognition is a long-standing and important topic in computer vision and pattern recognition research, supported by its diverse applications in real-world scenarios such as access control, consumer products, national identity, and border security. Recent advances in deep learning have greatly enhanced fingerprint recognition accuracy and efficiency alongside traditional hand-crafted fingerprint recognition methods, particularly in controlled settings. While state-of-the-art fingerprint recognition methods excel in controlled scenarios, like rolled fingerprint recognition, their performance tends to drop in uncontrolled settings, such as latent and contactless fingerprint recognition. These scenarios are often characterized by extreme degradations and image variations in the captured images. This performance drop is due to the inability of fingerprint embeddings (feature vectors obtained via deep networks) to generalize across the variations in captured fingerprint images between controlled and uncontrolled settings.
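As a simple illustration of how such fixed-length embeddings are used at match time (not the thesis's matchers), the sketch below scores a probe embedding against a gallery with cosine similarity and applies a decision threshold; the random embeddings and threshold value are placeholders for the outputs of a trained network and a tuned operating point.

import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Placeholder 512-D embeddings: a gallery of enrolled fingerprints and one probe.
gallery = normalize(rng.standard_normal((1000, 512)))
probe = normalize(rng.standard_normal(512))

# Cosine similarity reduces to a dot product after L2 normalization.
scores = gallery @ probe
best = int(np.argmax(scores))

threshold = 0.35  # operating point chosen from a desired false-match rate (placeholder value)
decision = "match" if scores[best] >= threshold else "no match"
print(f"best candidate {best} with score {scores[best]:.3f} -> {decision}")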
The challenges in the generalization of fingerprint embeddings, from controlled to uncontrolled settings, encompass issues such as insufficient labeled data, varying domain characteristics (often referred to as the “domain gap”), and the misalignment of fingerprint features due to information loss. This thesis proposes a series of methods aimed at addressing these challenges in various unconstrained fingerprint recognition scenarios. We begin in chapter 2 with an examination of cross-sensor and cross-material presentation attack detection (PAD), where the sensing mechanism and encountered presentation attack instruments (PA) may be unknown. We present methods to augment the given training data to include a wider diversity of possible domain characteristics, while simultaneously encouraging the learning of domain-invariant representations. Next, we turn our attention in chapter 3 to the challenging scenario of contact-to-contactless fingerprint matching, where misaligned fingerprint features due to differences in contrast, perspective differences, and non-linear distortions are corrected via a series of deep learning-based preprocessing techniques to minimize the domain gap between contact and corresponding contactless fingerprint images. In chapter 4, we aim to improve the sensor interoperability of fingerprint recognition by leveraging a diversity of deep learning representations, integrating convolutional neural network and attention-based vision transformer architectures into a single, multi-model embedding. Similarly, in chapter 5, we further improve the robustness and universality of fingerprint representations by fusing multiple local and global embeddings and demonstrate a marked improvement in latent-to-rolled fingerprint recognition performance, both in terms of accuracy and efficiency. Next, chapter 6 presents a method for synthetic fingerprint generation, capable of mimicking the distribution of real (i.e., bona fide) and PA (i.e., spoof) fingerprint images, to alleviate the lack of publicly available data for building robust fingerprint presentation attack detection algorithms. Finally, in chapter 7 we extend our fingerprint generation capabilities toward generating universal fingerprints of any fingerprint class, acquisition type, sensor domain, and quality, all to improve fingerprint recognition training and generalization performance across diverse scenarios.
Department: Computer Science and Engineering
Name: Declan McClintock
Date Time: Monday, April 8th, 2024 - 10:00 a.m.
Advisor: Dr. Charles Owen
Serious games research shows that games can increase engagement and improve learning outcomes over traditional instruction, but the impact of specific elements of serious games has yet to be fully explored across many contexts. Additionally, many existing intervention studies omit the details of the game design and development theory that informed the creation of the games used in the study. This leaves out an important level of context surrounding why the games were successful and does a disservice to the field by not propagating useful design theory.
Two issues with existing game design theories are that they do not build fully on top of each other and that they leave out practical guidelines for their use in the design and development processes. This further limits the spread of useful design theory and its impact in industry and academia. The work in this thesis carefully outlines the influence of existing game design theory on the design and development of a game project built to study the impact of the narrative element of serious games. Additionally, this thesis builds a new framework aimed at being more comprehensive, easier to build on top of, and accompanied by clear practical guidelines for its use. The main study in this thesis examines the engagement of students playing a single serious game with a cohesive narrative compared against multiple games without a narrative tying them together. These two cases cover the same set of learning content and differ only in their narratives. The results suggest that either approach is likely to have the same effect on engagement but that there is merit in exploring learning outcomes further.
This study’s research is supported by design research explaining the design theory behind the games developed for and used in the experiment as well as more specific details of the games’ production. This allows the results to be understood within a larger serious game design and development context that will help inform future work. Additionally, this thesis expands on the lessons learned from the design research and criticisms of existing frameworks to produce the Iterative Game Design and Development framework (IGDD). IGDD provides a broader framework for game design and development with guidelines for its application in practice. The IGDD framework also provides an explanation for how it should be modified and built off of to both allow it to be used across many contexts and to allow future theory building to build collaboratively on top of previous works rather than adjacent to and in assumed competition with other design theory.
Department: Computer Science and Engineering
Name: Junwen Chen
Date Time: Thursday, April 4th, 2024 - 2:00 p.m.
Advisor: Yu Kong
Action recognition is a crucial aspect of video understanding, with considerable progress being made in studies based on curated short video clips. However, in real-world scenarios, videos are often long-form and untrimmed, providing continuous surveillance of our surroundings. Unfortunately, progress in action recognition for long-form videos lags behind. Unlike short-term videos that concentrate on a single action, the primary challenge in long-form videos lies in understanding multiple actions/events within the footage to perform complex reasoning.
In this thesis, I will introduce my research endeavors in developing models to comprehend long-form videos. The first part of the thesis delves into perceiving the rich dynamics in long-form videos. My research seeks to learn fine-grained motion representation across multiple actions/events over a long-horizon range, by exploiting the potential of multi-modal context. The second part focuses on leveraging the long-range dependencies of the events in boosting temporal reasoning downstream tasks. Finally, considering the wide applications of video models, we work on cultivating trustworthiness in the models for long-form videos from static bias mitigation and interpretable reasoning perspectives.
Department: Computer Science and Engineering
Name: Guangyue Xu
Date Time: Thursday, February 15th, 2024 - 12:00 p.m.
Advisor: Parisa Kordjamshidi
Humans learn concepts in a grounded and compositional manner. Such compositional and grounding abilities enable humans to understand an endless variety of scenarios and expressions. Although deep learning models have pushed performance to new limits on many Natural Language Processing and Computer Vision tasks, we still have a lack of knowledge about how these models process compositional structures and their potential to accomplish human-like meaning composition. The goal of this thesis is to advance the current compositional generalization research on both the evaluation and design of the learning models. In this direction, we make the following contributions.
Firstly, we introduce a transductive learning method to utilize the unlabeled data for learning the distribution of both seen and novel compositions. Moreover, we utilize the cross-attention mechanism to align and ground the linguistic concepts into specific regions of the image to tackle the grounding challenge.
Secondly, we develop a new prompting technique for compositional learning by considering the interaction between element concepts. In our proposed technique, called GIPCOL, we construct a textual input that contains rich compositional information when prompting the foundation vision-language model. We use the CLIP model as the pre-trained backbone vision-language model and improve its compositional zero-shot learning ability with our novel soft-prompting approach; a generic sketch of this CLIP-based zero-shot setting is given after this list of contributions.
Thirdly, since retrieval plays a critical role in human learning, our work studies how retrieval can help compositional learning. We propose MetaReVision which is a new retrieval-enhanced meta-learning model to address the visually grounded compositional concept learning problem.
Finally, we evaluate the large generative vision and language models in solving compositional zero-shot learning within the in-context learning framework. We highlight their shortcomings and propose retriever and ranker modules to improve their performance in addressing this challenging problem.
Department: Computer Science and Engineering
Name: Iliya Miralavy
Date Time: Thursday, December 14th, 2023 - 9:00 a.m.
Advisor: Dr. Wolfgang Banzhaf
Space, while inherent to the natural world, often finds itself omitted in bio-inspired computational system designs. Spatial Genetic Programming (SGP) is a Genetic Programming (GP) paradigm that incorporates space as a fundamental dimension, evolving alongside Linear Genetic Programming (LGP) programs. In SGP, each individual model is represented by a 2D space containing one or more LGP programs, which execute in an order influenced by their spatial position. The contribution of this work is multi-fold: it begins by introducing SGP as a tool for studying the evolution of space in GP. It then applies the proposed system to a varied range of problems, including symbolic regression, classic control, and decision-making, comparing it with other common GP paradigms. It also examines how the spatial dimension influences generational diversity and the emergence of spatially induced localization and of iterative structures within the system. The findings of this research open new avenues toward better understanding natural evolution and how the dimension of space could serve as a handle for controlling important aspects of evolution.
Department: Computer Science and Engineering
Name: Roshanak Mirzaee Mazrae
Date Time: Monday, December 11th, 2023 - 12:30 p.m.
Advisor: Parisa Kordjamshidi
Spatial language understanding plays an essential role in human communication and perception of the physical world. It encompasses how people describe, understand, and communicate spatial relationships between objects and environmental entities, such as location, orientation, distance, and relative position. Spatial language processing presents numerous challenges, which often stem from the inherent ambiguity of natural language in describing spatial relations or the complexity of spatial reasoning to infer indirect relations, in particular, when multi-hop reasoning is needed. This thesis has four main contributions to learning and reasoning over spatial language.
The first contribution is proposing novel question-answering benchmarks to evaluate the spatial reasoning capability of deep neural models. These benchmarks include complex and realistic spatial phenomena not covered in previous work, making them more challenging for state-of-the-art language models (LMs). The second contribution is an approach to generate large-scale distant supervision for the spatial question answering and spatial role labeling tasks. We design grammar and reasoning rules to automatically generate spatial descriptions of scenes and corresponding QA pairs. In this approach, we integrate a diverse set of spatial relation types and expressions, complemented by additional functions, to enhance the flexibility and extensibility of the data generation process. Further training LMs on this data significantly improves their capability in spatial understanding, thereby enabling them to solve other benchmarks and external datasets better.
Furthermore, the third contribution explores the potential benefits of disentangling the processes of information extraction and reasoning in neural models to address the challenges of multi-hop spatial reasoning. To explore this, we design various models that disentangle extraction and reasoning (either symbolic or neural) and compare them with state-of-the-art baselines with no explicit design for these parts. Our experimental results consistently demonstrate the efficacy of disentangling, showcasing its ability to enhance models’ generalizability within realistic data domains.
Ultimately, the fourth contribution probes the role and impact of Large Language Models (LLMs) in spatial reasoning tasks. We evaluate the spatial reasoning capabilities of LLMs with and without in-context learning. In another approach, we integrate LLMs as the extraction module within the pipeline of extraction and symbolic reasoning. Our case studies and previous research on controlled environments demonstrate that incorporating LLMs in this pipeline can yield significant benefits. However, our experiments reveal that the intricacies of spatial language in real-world settings make the pipeline model inefficient, primarily due to escalating errors in the extraction process. We further explore the utilization of probabilistic logical reasoning and LLMs’ commonsense knowledge in real-world settings. These methods improve the model by providing comprehensive rules and relations that deterministic reasoning and the custom-designed symbolic reasoning module may not have captured before. However, even with these modifications, the pipeline model continues to exhibit inferior performance compared to LLMs.
Department: Computer Science and Engineering
Name: Li Liu
Date Time: Monday, November 27th, 2023 - 11:00 a.m.
Advisor: Zhichao Cao
Low-power Artificial Intelligence of Things (AIoT) Systems
The Internet of Things (IoT) is a major innovation of the information era, following the Internet and mobile networks, that aims to connect billions of end devices across scales. A multitude of IoT applications operate under constrained energy resources, which has made low-power IoT systems a subject of considerable research interest. The increasing need for AI in complex scenario-based composite tasks has led to the rise of the Artificial Intelligence of Things (AIoT), which encompasses research in two major directions: AI for IoT, which solves problems in IoT systems with AI techniques, and IoT for AI, which adopts IoT infrastructure and data to advance the development of AI models. While AIoT systems in low-power scenarios offer significant benefits, they also face specific challenges inherent to their design and operational requirements.
This dissertation delves into low-power AIoT from both angles. 1) We harness the capabilities of AI to predict and analyze the communication channels of dynamic long links in LoRaWAN, one of the Low-Power Wide-Area Networks (LPWANs). DeepLoRa adopts deep neural networks based on a bidirectional LSTM (Long Short-Term Memory) to capture the sequential information of environmental influence on LoRa link performance for accurate LoRa link path-loss estimation. It reduces the path-loss estimation error to less than 4 dB, which is 2x smaller than state-of-the-art models. LoSee extends the contributions of DeepLoRa: it measures the real-world fine-grained performance, including a detailed coverage study and a feasibility analysis of fingerprint-based localization, of a self-deployed LoRaWAN system with temporal and spatial dynamics. 2) We design energy-efficient IoT systems that facilitate the deployment of AI models for practical applications. FaceTouch enables accurate face-touch detection with a multimodal wearable system consisting of an inertial sensor on the wrist and a novel vibration sensor on the finger. We leverage a cascading classification model, including simple filters and a DNN, to significantly extend battery life while maintaining high recall. FaceTouch achieves a 93.5% F-1 score and can continuously detect face-touch events for 79-273 days on a small 400 mWh battery, depending on usage.
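The energy-saving logic of such a cascading classifier can be sketched as follows; the window sizes, thresholds, and two-stage structure are illustrative assumptions rather than FaceTouch's actual pipeline.

    import numpy as np

    def cheap_motion_filter(accel_window, threshold=1.5):
        """Stage 1: a low-cost energy check on the wrist IMU window.
        Only windows with enough motion energy wake the expensive model."""
        return float(np.mean(np.abs(accel_window))) > threshold

    def dnn_classifier(accel_window, vib_window):
        """Stage 2 placeholder for the DNN; a toy rule stands in for it here."""
        return float(np.max(vib_window)) > 0.8 and float(np.std(accel_window)) > 0.5

    def detect_face_touch(accel_window, vib_window):
        # Most windows are rejected by the cheap filter, so the DNN (and its
        # energy cost) is only paid for a small fraction of the data stream.
        if not cheap_motion_filter(accel_window):
            return False
        return dnn_classifier(accel_window, vib_window)

    rng = np.random.default_rng(1)
    accel = rng.normal(0.0, 0.3, size=50)  # mostly-still wrist, filtered out early
    vib = rng.normal(0.0, 0.1, size=50)
    print(detect_face_touch(accel, vib))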
In general, this dissertation studies both theoretical and practical aspects in the field of low-power AIoT systems, including LoRaWAN link behavior analysis and building practical wearable systems. These advancements not only underscore the feasibility of deploying low-power AIoT in real-world settings but also pave the way for future research and development in this domain, aiming to bridge the gap between IoT and AI for the creation of smarter, sustainable, and more efficient technologies.
Department: Computer Science and Engineering
Name: Mehmet Cagri Kaymak
Date Time: November 16th, 2023 - 10:00am
Advisor: Hasan Metin Aktulga
Molecular dynamics (MD) is a powerful computational method used to simulate the motion of atoms and molecules. MD simulations compute the evolution of a system of interacting particles by applying Newton’s equations of motion, facilitating the study of a range of physical, chemical, and biological phenomena. While quantum mechanical (QM) simulations result in accurate predictions of geometries and energies essential for studying various phenomena, their computational complexity has led to the emergence of new approaches such as classical force fields, reactive force fields, and machine learning potentials (MLPs), each offering unique trade-offs. Classical force fields offer longer simulation times due to assumptions such as static bonds and charges, which prohibit the study of reactive systems. Reactive force fields, such as ReaxFF, bridge the gap between QM methods and classical force fields by allowing dynamic bonds and charges. The improved flexibility results in a higher computational load and a more complex functional form that is hand-crafted by domain experts. MLPs are a more recent approach that utilizes large datasets to eliminate complex functional forms, while also leveraging the vast ecosystem of machine learning frameworks for enhanced computational efficiency and ease of development.
As the number of methodologies increases, the landscape of MD methods becomes more complex, with each method bringing unique attributes and challenges in simulating molecular systems. We introduce innovative hybridization techniques aiming to leverage the strengths of multiple modeling approaches, improving predictive capabilities and computational efficiency. We introduce a hybrid modeling approach called ReaxFF/AMBER that combines the reactivity and polarization capabilities of ReaxFF with the efficiency of classical force fields, facilitating the simulation of larger reactive regions. Although ReaxFF can offer high fidelity when trained carefully, the existing parameterization tools lack the efficiency and speed essential for creating new ReaxFF parameter sets for different applications of interest. We have proposed a novel parameter optimization approach, JAX-ReaxFF, leveraging the capabilities of a scalable machine learning framework to drastically reduce the training times for ReaxFF, thus enhancing the development of new force fields for various applications. We have also modified JAX-ReaxFF to run end-to-end differentiable simulations on different architectures such as CPUs, GPUs, or TPUs with the help of JAX. JAX is a library known for high-performance numerical computing and it provides features such as automatic differentiation and optimization of Python functions. This approach also allows for improved integration with existing machine learning software infrastructure, offering enhanced flexibility and performance portability.
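To illustrate the gradient-based parameter optimization idea without depending on any particular framework (a toy sketch, not JAX-ReaxFF itself), the code below fits the two parameters of a hypothetical pairwise potential to synthetic reference energies by analytic-gradient descent.

    import numpy as np

    # Toy pairwise potential E(r) = a * exp(-b * r); reference data are synthetic.
    rng = np.random.default_rng(2)
    r = np.linspace(1.0, 3.0, 40)
    a_true, b_true = 5.0, 1.3
    e_ref = a_true * np.exp(-b_true * r) + rng.normal(0.0, 0.01, size=r.size)

    def loss_and_grad(a, b):
        e = a * np.exp(-b * r)
        resid = e - e_ref
        loss = np.mean(resid ** 2)
        # Analytic gradients of the mean-squared error w.r.t. the two parameters;
        # a differentiable framework would produce these automatically.
        g_a = np.mean(2.0 * resid * np.exp(-b * r))
        g_b = np.mean(2.0 * resid * (-a * r * np.exp(-b * r)))
        return loss, g_a, g_b

    a, b, lr = 4.0, 1.0, 0.1  # rough initial guess and step size
    for _ in range(2000):
        loss, g_a, g_b = loss_and_grad(a, b)
        a -= lr * g_a
        b -= lr * g_b
    print(f"fitted a={a:.2f}, b={b:.2f}, final loss={loss:.5f}")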
Lastly, we propose and compare various uncertainty quantification (UQ) methods suitable for MLPs. These methods are essential for active learning-based data generation approaches, which are crucial for training data-intensive machine learning models. While our primary focus is on MLPs, the datasets created using active learning methods could also enhance the parameterization efforts for classical and reactive force fields.
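A minimal sketch of one common UQ signal, assuming a deep-ensemble setup: disagreement (standard deviation) among ensemble members flags the configurations most worth labeling in an active-learning loop. The "models" below are simple stand-in functions, not actual MLPs.

    import numpy as np

    def make_member(seed):
        """Stand-in for one trained ensemble member: a slightly perturbed
        function mapping a 1D descriptor to a predicted energy."""
        w = 1.0 + 0.05 * np.random.default_rng(seed).normal()
        return lambda x: np.sin(w * x)

    ensemble = [make_member(s) for s in range(5)]
    rng = np.random.default_rng(3)
    candidates = rng.uniform(0.0, 6.0, size=200)  # unlabeled configurations

    preds = np.stack([m(candidates) for m in ensemble])  # (members, samples)
    uncertainty = preds.std(axis=0)                      # ensemble disagreement

    # Active-learning selection: label the most uncertain configurations first.
    query = candidates[np.argsort(uncertainty)[-10:]]
    print("configurations selected for labeling:", np.round(query, 2))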
Department: Computer Science and Engineering
Name: Wentao Wang
Date Time: November 15th, 2023 - 1:00pm
Advisor: Jiliang Tang
As a prominent component of artificial intelligence (AI), machine learning (ML) techniques play a significant role in the stunning achievements of AI technologies in human society. ML techniques enable computers to leverage collected data to tackle various kinds of tasks in practice. However, more and more studies reveal that the capability of an ML model decreases dramatically if the distribution of the data used to train it is imbalanced. As imbalanced data distributions are widespread in many real-world applications, improving the performance of ML models under imbalanced data distributions has attracted considerable attention.
While a growing number of related works have been proposed to make ML models learn from imbalanced data more effectively, the study of this topic is far from complete. In this thesis, we propose several studies to fill gaps in this direction. First, most existing data-generation-based works only consider the local distribution information within classes, while the global distribution is totally ignored. We demonstrate that both global and local distribution information are important for producing high-quality synthetic data samples to balance the data distribution. Second, almost all existing studies assume that collected data samples are associated with noise-free labels, and hence they cannot work well when annotated labels are noisy. We investigate the problem of learning from imbalanced crowdsourced labeled data and propose a novel framework as a solution with satisfactory performance. Third, research investigating the impact of imbalanced data distribution on the robustness of ML models is currently rather limited. To this end, we empirically verify that the adversarial training (AT) approach alone cannot bring enough robustness for ML models under imbalanced scenarios, while integrating the reweighting strategy with AT can be very helpful. In addition, we also propose an effective data-augmentation-based framework to benefit AT under imbalanced scenarios.
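As a minimal illustration of the reweighting strategy mentioned above (not the thesis's specific framework), per-class weights can be set inversely proportional to class frequency and applied to the training loss:

    import numpy as np

    def inverse_frequency_weights(labels):
        """Weight each class by 1 / frequency so minority classes count more."""
        classes, counts = np.unique(labels, return_counts=True)
        weights = counts.sum() / (len(classes) * counts)
        return dict(zip(classes, weights))

    def weighted_cross_entropy(probs, labels, class_weights):
        """Mean cross-entropy with per-sample weights taken from the label's class."""
        w = np.array([class_weights[y] for y in labels])
        nll = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
        return float(np.mean(w * nll))

    labels = np.array([0] * 90 + [1] * 10)   # imbalanced toy labels
    probs = np.full((100, 2), 0.5)           # uninformative toy predictions
    weights = inverse_frequency_weights(labels)
    print(weights, weighted_cross_entropy(probs, labels, weights))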
Department: Computer Science and Engineering
Name: Manni Liu
Date Time: November 15th, 2023 - 11:00am
Advisor: Zhichao Cao
LOCALIZATION AND SECURITY: PUSH THE LIMIT OF IOT SYSTEM DESIGN
The Internet of Things (IoT) uses sensors as the information source for machine intelligence. Its applications range from smart homes and smart cities to wearable healthcare and smart farming. An IoT architecture usually covers four stages: sensor data collection, data transmission, data processing, and the application model. Beyond prediction precision, IoT research also pursues improved efficiency, cost savings, and system scalability.
In pursuit of these goals, we push the limit of IoT system design from the following three perspectives. (1) We exploit the potential of the sensors in smart devices, including sensor fusion and the possibility of new IoT functions. (2) We design machine learning models for IoT applications, including feature engineering and model selection. (3) We design and implement lightweight IoT systems for smart devices such as laptops, smartphones, and smart assistants, under computational resource constraints.
In this dissertation, we introduce our efforts on IoT applications for localization and security. EyeLoc is a smartphone-vision-enabled localization system designed for large shopping malls. The results show that the 90-percentile errors of localization and heading direction are 5.97 m and 20° in a 70,000 m² mall. Patronus protects acoustic privacy from malicious secret audio recordings using the nonlinear effect of microphones. Our experiments show that only 19.7% of words protected by Patronus can be recognized by unauthorized recorders. SoundFlower is a sound source localization system for voice assistants. It can locate a user in 3D space through the wake-up command with a median error of 0.45 m.
In general, we explore the potential of diverse sensors for IoT services and build machine learning models that extract the most information from sensor data. The applications we study are specifically about localization and security.
Department: Computer Science and Engineering
Name: Jamell Anthony Dacon
Date Time: Monday, July 3, 2023 - 1:00pm
Location: Zoom
Advisor: N/A
Natural language processing (NLP) is a subfield of artificial intelligence (AI) that has become increasingly prominent in our everyday lives. NLP systems are now ubiquitous, as they are capable of identifying offensive and abusive conversational content and detecting hate speech on social media platforms, voice and speech recognition and transcription, news recommendation, dialogue systems and digital assistants, language generation, and more. Yet the benefits of these language technologies do not accrue evenly to all of their users, leading to harmful social impacts as NLP systems reproduce stereotypes or produce fallacious results. Most AI systems and algorithms are data-driven and require natural language data upon which to be trained. Thus, data is tightly tied to the functionality of these algorithms and systems. These systems generate complex social implications, e.g., displaying human-like social biases (such as gender bias) that induce technological marginalization and increased feelings of disenfranchisement.
Throughout this thesis, I argue that how harms arise in NLP systems, and who is harmed by these biases, can only be conceptualized and understood at the intersection of NLP, justice and equity (e.g., Data Science for Social Good), and the coupled relationships between language and both social and racial hierarchies. I propose to address three questions at this intersection: (1) How can we conceptualize and quantify the aforementioned harms? (2) How can we introduce a set of measurements to understand "bias" in NLP systems? (3) How can we quantitatively and qualitatively ensure "fairness" in NLP systems?
To address these pertinent questions, we attempt to differentiate the two consequences of predictive bias in NLP: (1) outcome disparities (i.e., racial bias) and (2) error disparities (i.e., poor system performance), to explicate the importance of modeling social factors of language by exploiting NLP tools to examine predictive biases of both binary gender-specific (male and female) and LGBTQIA2S+ representations, and of an English language variety, i.e., African American English (AAE). Language reflects the society, ideology, cultural identity, and customs of communicators, as well as their values. Therefore, natural language data, culture, and systems are intertwined with social norms.
Nevertheless, social media and online services contain rich textual information on topics surrounding ethnicity, gender identity and sexual orientation (including members of the LGBTQIA2S+ community), and language varieties such as AAE. This facilitates the collection of large-scale corpora to study social biases in NLP systems, in hopes of reducing the stigmatization, marginalization, mischaracterization, or erasure of dialectal languages and their speakers, pushing back against potentially discriminatory practices (in many cases discriminatory through oversight more than malice). In this thesis, I propose several studies to minimize the gaps between gender, race, and NLP systems' performance within the scope of the three aforementioned questions. To enable in-depth conversations about what kinds of system behaviors are harmful, in what ways, to whom, and why, I will draw on three case studies, (1) Gender and Sexual Identities, Orientations and Expressions, and (2) Language, Race and Culture, each divided into folds, and conclude with (3) Gender, Race, Language and Social Justice, referencing five of my published works accepted to top-tier conferences that engage with social factors of language, affected communities, and NLP systems.
Department: Computer Science and Engineering
Name: Tian Xie
Date Time: Tuesday, June 6, 2023 - 9:00am
Location: Zoom
Advisor: N/A
Today, the world has gone mobile. By 2021, mobile networks had connected 23.4 billion mobile devices and provided 5.3 billion users with ubiquitous mobile services. People can use the cellular network for voice and text communication, Internet access, monetary transactions, and more. As cellular networks develop, many new services continue to be added and provided by the operators.
Considering the great number of devices and people connected, it is very important to secure mobile networks. However, doing so is challenging because of the complexity of the networks, rapidly evolving technology, the wide range of devices, and the distributed nature of the network. Any vulnerability in mobile networks can threaten the entire wireless ecosystem, which motivates this dissertation's security study identifying and addressing security vulnerabilities in mobile networks to make them secure and dependable. This dissertation includes three studies of the most essential cellular network services (i.e., IP Multimedia Subsystem services, wireless IoT services, and Internet Application Services), as follows.
In the study of cellular network IP Multimedia Subsystem (IMS) security, we conduct the first security study on the operational VoWi-Fi (Voice over Wi-Fi) services in three major U.S. operators’ networks using commodity devices. We disclose that current VoWi-Fi security is not bullet-proof and uncover three vulnerabilities. Two proof-of-concept attacks are devised and both of them can bypass the existing security defenses. We propose the solutions to address all discovered vulnerabilities.
In the study of wireless IoT services, we conduct the security study on both cellular and Wi-Fi IoT services. However, this dissertation only introduces our empirical security study on cellular IoT service charging over the major U.S. carriers. We discover security vulnerabilities and analyze their root causes. To assess their real-world impact, proof-of-concept attacks are devised. In the end, we analyze the challenges in addressing these vulnerabilities and develop an anti-abuse solution to mitigate attack incentives. The solution is standard-compliant and can be used immediately in practice.
In the study of Internet Application Services (IAS), we propose a novel security framework, MPKIX, designated Mobile-assisted PKIX (Public-Key Infrastructure X.509). MPKIX secures both IAS providers and users by leveraging the broadly used PKIX services and mobile networked systems. It provides a reliable, privacy-protecting user verification mechanism, largely mitigates the possibility of ID theft attacks, and benefits other involved parties.
In conclusion, security research on cellular network services can help secure the mobile ecosystem, facilitate global deployment, and move us toward secure and dependable mobile networks.
Department: Computer Science and Engineering
Name: Xiao Zhang
Date Time: Tuesday, May 23, 2023 - 12:00pm
Location: Zoom
Advisor: Dr. Li Xiao
Optical Wireless Communication (OWC) techniques are potential alternatives for next-generation wireless communication. These techniques, for example VLC (visible light communication), OCC (optical camera communication), Li-Fi, FSOC (free space optical communication), and LiDAR, are increasingly deployed in our daily lives. To provide fast and secure wireless services, numerous OWC approaches use LED lamps as transmitters and photodiodes or cameras to receive light signals. However, present OWC approaches are constrained by slow speeds and limited use cases. The primary goal of this thesis is to investigate the potential of both the transmitter and receiver sides, with effective strategies designed to boost the data rate of OWC and extend its use scenarios from indoor to outdoor and from terrestrial to non-terrestrial. In this thesis, we study the possibilities of various spatial-temporal dimensions, from 1D to 2D to 3D to 4D, for optical wireless communication and the optical wireless sensing it enables. We briefly introduce them below.
1D Spatial-Temporal Optical Wireless Communication. We found that compensation symbols, which are commonly used for fine-grained dimming, are not used for data transmission in OOK-based Li-Fi for indoor lighting and communication. We demonstrate the LiFOD framework, which can be installed on commercial off-the-shelf (COTS) Li-Fi systems, to increase the data rate of existing Li-Fi systems. We utilize compensation symbols, previously used only for dimming, to carry data bits (bit patterns) for enhanced throughput.
2D Spatial-Temporal Optical Wireless Communication. In our study of camera-based OWC (i.e., optical camera communication), we first investigate 2D rolling-block spatial diversity in the camera imaging process, rather than 1D rolling-strip spatial diversity, for optical symbol modulation. Our proposed RainbowRow overcomes the limitation of restricted frequency responses in traditional optical camera communication. We implement a low-cost RainbowRow prototype. We handle flickering and optical signal overlapping at the transmitter, as well as robust decoding at the commercial camera in a variety of settings.
3D Spatial-Temporal Optical Wireless Communication. Compared to existing acoustic and RF-based approaches, underwater optical wireless communication appears promising due to its broad bandwidth and extended communication range. Existing optical tags (bar/QR codes) embed data in the plane with limited symbol distance and scanning angles. U-Star first exploits passive 3D optical identification tags for underwater navigation. We model 3D spatial diversity and utilize it to increase the distance between data elements in our proposed UOID tags for simple and robust underwater navigation. To adapt to harsh underwater circumstances, we develop underwater denoising algorithms with CycleGAN, CNN-based relative positioning, and real-time data parsing.
3D Spatial-Temporal Optical Wireless Sensing. The fourth project considers optical-wireless-enabled hand gesture reconstruction. Vision-based approaches, constrained by time-consuming image processing, adopt a low 60 Hz location sampling rate (frame rate) for real-time hand gesture recognition. In this project, we propose RoFin, which first exploits six temporal-spatial 2D rolling fingertips for real-time 3D reconstruction of a 20-joint hand pose. RoFin designs active optical labeling for finger identification and enhances inside-frame 3D location tracking via a high rolling shutter rate (5-8 kHz). These features enable great potential for enhanced multi-user HCI, virtual writing for Parkinson's sufferers, and more. We implement RoFin prototypes with wearable gloves fitted with low-power single-colored LED nodes and commercial cameras.
4D Spatial-Temporal Optical Wireless Integrated Communication and Sensing. In the fifth project, we explore integrated optical wireless communication and sensing/localization in drone networks. Existing centralized radio frequency control from a base station faces mutual interference and high latency, which causes localization errors and lacks on-site drone-to-drone interaction. Because of its high spatial multiplexing capability, line-of-sight (LoS) security, broader bandwidth, and intuitive visual manner, optical camera communication (OCC) is considered a potential alternative for sensing and communication in drone clusters. We propose PoseFly, a 4-in-1 AI-assisted optical camera communication system with drone identification, on-site localization, quick-link communication, and lighting for swarming drones.
These explorations of multiple spatial-temporal dimensions across various applications demonstrate that optical wireless communication can be a promising option among next-generation wireless network techniques.
Department: Computer Science and Engineering
Name: Emily Ribando-Gros
Date Time: Tuesday, April 25, 2023 - 11:00am
Location: Zoom
Advisor: N/A
The growing emphasis on data collection and machine learning has renewed interest in the contributions of the ubiquitous Laplace operator in shape and data analysis. Variants and simplifications of the de Rham-Hodge Laplacian of differential geometry have emerged as fast and concise topological and geometric shape descriptors for complex data sets. However, choosing the appropriate type of Laplace operator depends on the application and discretization scheme, especially in the context of volumes with 2-manifold boundaries, where the treatment of boundary conditions is crucial.
In this dissertation, we present the Boundary-Induced Graph (BIG) Laplacian, introduced using tools from Discrete Exterior Calculus (DEC), to bring the graph Laplacian and Hodge Laplacian on an equal footing for manifolds with boundary. BIG Laplacians are defined on discrete domains, accounting for appropriate normal or tangential boundary conditions. We examine the similarities and differences of the graph Laplacian, BIG Laplacian, and Hodge Laplacian through an in-depth comparison.
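For orientation, and using standard definitions rather than anything specific to the BIG construction: on a graph, the 0-th combinatorial Hodge Laplacian coincides with the familiar graph Laplacian, $L_0 = D - A = \partial_1 \partial_1^{T}$, where $D$ is the degree matrix, $A$ the adjacency matrix, and $\partial_1$ the oriented vertex-edge incidence matrix; more generally, $\Delta_k = \partial_k^{T}\partial_k + \partial_{k+1}\partial_{k+1}^{T}$ combines the boundary matrices of adjacent dimensions. How these operators are restricted near a boundary is where the normal or tangential boundary conditions mentioned above enter.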
Furthermore, we demonstrate experimentally the conditions for convergence of BIG Laplacian eigenvalues to those of the Hodge Laplacian for elementary shapes using an Eulerian representation of 3D domains as level-set functions on regular grids. Additionally, we show that similar schemes for defining Laplacians can be used as the kinetic energy component for the Hamiltonian operator of the density of small biological molecules. The spectra of such Hamiltonians serve as useful features for machine learning tasks in drug design and density function theory advancements, offering potential implications for practical applications.
Department: Computer Science and Engineering
Name: Jose Guadalupe Hernandez
Date Time: Friday, April 21, 2023 - 11:00am
Location: 3540 Engineering Building
Advisor: N/A
Evolutionary algorithms provide an effective set of tools for solving complex optimization problems found in the real world. When a new evolutionary algorithm is proposed, it is typically evaluated against hand-picked test problems or a benchmark suite to demonstrate its abilities. Indeed, multiple benchmark suites exist to shine a light on the types of problems an evolutionary algorithm is effective against. Such suites, however, are limited in their ability to help us understand why an evolutionary algorithm performs the way it does. In particular, problems with complex fitness landscape topologies do not allow for an intuitive understanding of how an algorithm traverses the search space.
Here, I propose a set of low-level diagnostic tools as an alternative to benchmark suites to more precisely and intuitively measure the strengths and weaknesses of an evolutionary algorithm; each diagnostic generates a handcrafted search space topology with targeted problem characteristics (i.e., modality, deception, dimensionality, etc.). More specifically, I focus on how the set of diagnostics can be used to develop a deeper understanding of a critical component found across many evolutionary algorithms -- the selection scheme. Indeed, we find key differences among commonly used selection schemes, where these differences help identify the kinds of problems each scheme is best suited for.
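As a loose, self-contained illustration of the diagnostic idea (the handcrafted landscapes and selection schemes studied in the dissertation differ), the sketch below defines a simple deceptive diagnostic landscape and applies one round of tournament selection to it; all parameters are hypothetical.

    import numpy as np

    rng = np.random.default_rng(4)

    def deceptive_fitness(genome, target=0.9):
        """Handcrafted diagnostic landscape: genes at or above `target` are rewarded,
        peaking exactly at the target, while genes below it are rewarded for moving
        toward zero, creating a deceptive gradient that points away from the optimum."""
        per_gene = np.where(genome >= target, 1.0 - (genome - target), -genome)
        return float(per_gene.sum())

    def tournament_selection(pop, fitnesses, k=4):
        """Pick the best of k random candidates, repeated to fill the parent pool."""
        parents = []
        for _ in range(len(pop)):
            idx = rng.choice(len(pop), size=k, replace=False)
            parents.append(pop[idx[np.argmax(fitnesses[idx])]])
        return np.array(parents)

    pop = rng.uniform(0.0, 1.0, size=(50, 10))  # 50 genomes, 10 genes each
    fit = np.array([deceptive_fitness(g) for g in pop])
    parents = tournament_selection(pop, fit)
    print("mean fitness of selected parents:",
          np.mean([deceptive_fitness(p) for p in parents]))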
Department: Computer Science and Engineering
Name: Ritam Ganguly
Date Time: Tuesday, April 18, 2023 - 12:30am
Location: 3405 Engineering Building and Zoom
Advisor: N/A
Given the broad scale of distribution and complexity of today's systems, an exhaustive model-checking algorithm is computationally costly and testing is not exhaustive enough. Runtime verification, on the other hand, analyzes a developing execution of the system, whether online or offline, to check the health of the system with respect to some specification. Runtime verification of distributed systems with respect to temporal specifications is both critical and challenging. It is critical because it ensures the reliability of the system by detecting violations of system requirements. To guarantee the absence of violations, one has to analyze every possible ordering of system events, which makes the task computationally expensive and hence challenging. In this dissertation, we focus on a partially synchronous distributed system, where the various components do not share a common global clock and a clock synchronization algorithm limits the maximum clock skew among processes to a constant. The main contributions of this dissertation are as follows.
Department: Computer Science and Engineering
Name: Mohammad Hosein Khalifeh
Date Time: Friday, April 14, 2023 - 1:30pm
Location: 3105 Engineering Building
Advisor: N/A
Most networks change constantly, and predicting links or recovering from link failures is crucial for maintaining a network. Distance-based graph invariants are important criteria for network maintenance. A graph mutation is a change in the edge set of a graph, and a graph gradient is the change in a graph invariant after a mutation. We present general concepts of the discrete integral and derivative for vertex- and edge-weighted graphs as a tool. Using these concepts, some related problems are solved more efficiently, more flexibly, and more simply than with existing solutions.
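As a concrete illustration of these definitions (an example of ours, not drawn from the dissertation): take the Wiener index, a classical distance-based invariant, $W(G) = \sum_{\{u,v\} \subseteq V(G)} d_G(u,v)$. For the single-edge mutation $G \mapsto G + e$, the corresponding gradient is $\nabla_e W = W(G+e) - W(G) \le 0$, since adding an edge can only shorten or preserve pairwise distances; for the path $a\text{-}b\text{-}c$ with the edge $ac$ added, the gradient is $3 - 4 = -1$.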
Department: Computer Science and Engineering
Name: Vincent Ragusa
Date Time: Tuesday, April 4, 2023 - 1:00pm
Location: 1455A BPS
Advisor: N/A
Evolutionary computation is a powerful optimization tool, and an invaluable test bed for population genetics. Evolutionary algorithms can become stuck on local optima, but can escape these traps by temporarily losing fitness in order to discover even higher fitness in a process called valley-crossing. Valley-crossing is fundamentally linked to the balance between the forces of selection and variation, and as such, controlling this balance is important for optimizing the efficiency of evolutionary algorithms. Nature, in contrast, is not actively optimized for performance, and yet nature seems to overcome many challenges that evolutionary algorithms do not. It is possible that nature benefits from a highly dynamic balance between selection and variation, and this constant flux helps natural populations avoid stagnation and overcome obstacles in the fitness landscape. Working with this hypothesis in mind, I investigate the nature of selection and how natural phenomena strengthen or weaken it.
I find that selection strength can be thought of as the degree to which an evolving system is dissimilar to neutral drift. This perspective opens the door to accepting all phenomena that affect the strength of selection as part of a unified theory of selection that treats selection strength as an emergent property. I present a new evolutionary dynamic, the free-for-all effect, which is the reduction of selection strength on organisms with higher-than-average fitness. Free-for-all can result in rapid evolutionary adaptation that would otherwise seem impossible, and provides an elegant explanation for punctuated equilibrium. The discovery of free-for-all highlights the importance of spatial structure in evolving populations, and has led to the design of a new evolutionary search method called super explorers. Super explorers mimic the free-for-all effect and improve evolutionary search, while placing full control into the hands of the algorithm designer.
Department: Civil and Environmental Engineering and Computer Science and Engineering
Name: Hamed Bolandi
Date Time: Wednesday, March 29, 2023 - 1:00pm
Location: 3546D Engineering Building and Zoom
Advisor: Dr. Vishnu Boddeti
This multidisciplinary research proposes deep neural networks to bypass finite element analysis (FEA) and predict high-resolution stress distributions on loaded steel plates with variable loading, geometries, and boundary conditions. FEA for structures has been broadly used to conduct stress analysis of various civil and mechanical engineering structures. Conventional methods, such as FEA, provide high-fidelity solutions but require solving large linear systems that can be computationally intensive.
The existing workflow for FEA applications includes (i) modeling the geometry and its components, (ii) specifying material properties, boundary conditions, and loading, (iii) applying a meshing strategy, and (iv) stress analysis, which may be time-consuming depending on the complexity of the model. Instead, deep learning (DL) techniques can generate solutions significantly faster than conventional run-time analysis, which can prove extremely valuable in real-time structural assessment applications. In this work, a convolutional neural network (CNN) was designed and trained to use the geometry, boundary conditions, and static load as input to predict the stress contours in intact steel plates. Furthermore, we predict high-resolution stress distributions on damaged steel plates using CNNs augmented with custom loss functions that use physics rules to bypass the need for finite element analysis.
We embedded physics constraints into the loss function to constrain model training, precisely capturing stress concentrations around the tips of various structural damage configurations. The proposed technique’s performance was compared to finite element simulations using a partial differential equation (PDE) solver. There is also an emerging need for the prediction of dynamic stress distributions, since catastrophic failure of structural components is often caused by lateral loads such as earthquakes and winds. Thus, accurate predictions of dynamic stress distributions are useful during highly disruptive events to guide corrective actions. Neuro-DynaStress is proposed to predict the entire sequence of stress distributions based on finite element simulations using a PDE solver.
More specifically, a CNN, along with a multi-head attention transformer and feature alignment, is used to extract features and capture the data’s temporal dependence. The model was designed and trained to use the geometry, boundary conditions, and sequence of loads as input and predict the sequences of high-resolution von Mises stress contours. Moreover, to increase the accuracy of dynamic stress prediction, we propose a Physics-Informed Neural Network (PINN). The PINN-Stress model can predict the entire sequence of stress distributions based on finite element simulations using a PDE solver. Using automatic differentiation, we embed a partial differential equation into the deep neural network’s loss function to incorporate information from both measurements and PDEs. To force our model to learn the physical constraints, we minimize the violation of the equation of motion as well as the boundary condition violation, fully enforcing the underlying PDE. The PINN-Stress model can predict the sequence of normal and shear stress distributions in almost real time and generalizes better than the model without PINN. Our model is also able to predict von Mises stress using the von Mises equation.
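A schematic of how such a physics-informed loss is typically assembled (a sketch under assumptions, not the PINN-Stress implementation): the data misfit is combined with penalties on the equation-of-motion residual and on boundary-condition violations, using hypothetical weights and placeholder arrays.

    import numpy as np

    def pinn_loss(pred, target, pde_residual, bc_violation,
                  w_data=1.0, w_pde=1.0, w_bc=1.0):
        """Composite physics-informed loss: the network is penalized not only for
        mismatching the reference stress field but also for violating the governing
        equation of motion and the prescribed boundary conditions."""
        data_term = np.mean((pred - target) ** 2)
        pde_term = np.mean(pde_residual ** 2)   # residual of the equation of motion
        bc_term = np.mean(bc_violation ** 2)    # deviation from boundary conditions
        return w_data * data_term + w_pde * pde_term + w_bc * bc_term

    # Toy arrays standing in for a predicted stress field and its physics residuals.
    rng = np.random.default_rng(5)
    pred, target = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))
    residual = rng.normal(scale=0.1, size=(64, 64))
    boundary = rng.normal(scale=0.05, size=64)
    print(pinn_loss(pred, target, residual, boundary))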
Department: Computer Science and Engineering
Name: Hayam Abdelrahman
Date Time: Thursday, March 16, 2023 - 1:00pm
Location: Zoom
Advisor: N/A
Locating neck-like features, or locally narrow parts, of a surface is crucial in various applications such as segmentation, shape analysis, path planning, and robotics. Topological methods are often utilized to find the set of shortest loops around handles and tunnels. However, there are abundant neck-like features on genus-0 shapes without any handles. While 3D geometry-aware topological approaches exist to find neck loops, their construction can be cumbersome and may even lead to unintuitive loops. Here we present two methods for efficiently computing a complete set of surface loops that are not limited to the topologically nontrivial independent loops.
In the first approach, we propose an efficient “topology-aware geometric approach” to compute the tightest loops around neck features on surfaces, including genus-0 surfaces. We use the critical points of a processed distance function as a Morse function to locate possible neck-like features and to evaluate their significance. Critical points of a Morse function defined on a volume provide rich topological and geometric information about the structure of the shape. Our algorithm starts with a volumetric representation of an input surface and then calculates the distance function of mesh points to the boundary surface as a Morse function. We directly create a cutting plane through each neck feature. Each resulting loop can then be tightened to form a closed geodesic representation of the neck feature.
It is known that reducing the dimension of a problem typically boosts efficiency drastically. Hence, we propose our second approach, which is a novel, efficient approach that uses the skeleton of the shape to compute such surface loops. Given a closed surface mesh, our algorithm produces a practically complete set of loops around narrow regions of the volume enclosed by or outside the surface. Moreover, as our approach accepts a 1D representation of the shape as input, it significantly simplifies and accelerates computations. In particular, the handle-type loops are found by examining a subset of the skeleton points as candidate loop centers; and tunnel-type loops are found by examining only high-valence skeleton points.
Department: Computer Science and Engineering
Name: Nikolay "Nick" Ivanov
Date Time: Monday, February 27, 2023 - 1:00pm
Location: 3540 Engineering Building and Zoom
Advisor: N/A
In recent decades, we have witnessed a convergence of multiple technologies into the integrated, ever-evolving Smart World ecosystem. The ongoing evolution of the Smart World is shaped by cross-technological integration, as well as the adoption of new technologies into the ecosystem. In particular, academia and industry envision blockchain technology as one of the major new additions to the Smart World. However, the adoption of blockchain technology is impeded by three major practical challenges: security, scalability, and usability. This thesis aims to address these three challenges by focusing on revealing new blockchain attacks, facilitating threat mitigation in smart contracts, and introducing new trust-free applications of blockchain technology.
First, this thesis addresses some security challenges of blockchain largely overlooked in existing research. We discovered six zero-day social engineering attacks in Ethereum smart contracts and propose measures to address them. Furthermore, we introduce a new attack against hardware crypto wallets, confirmed by the manufacturers of the wallets, which evades security verification by the user.
Second, the thesis elaborates on defending smart contracts against attacks. We design a comprehensive five-dimensional classification taxonomy of smart contract defense tools and classify 133 existing threat mitigation solutions using our taxonomy. Next, we introduce a new smart contract security testing approach called transaction encapsulation and implement a transaction testing tool that reveals the actual outcomes (either benign or malicious) of Ethereum transactions.
Third, the thesis introduces novel practical blockchain applications that exhibit increased security, privacy, and user control compared to other distributed solutions. We propose a framework that uses a single Ethereum smart contract to enable high-performance, scalable smart contracts on the cloud. Finally, the thesis introduces a solution that uses Ethereum smart contracts to leverage decentralized networks of WiFi hotspots with cross-domain authentication and automated QoS enforcement.
We implemented and thoroughly evaluated all the proposed attacks, defenses, and frameworks, thereby confirming the real-world applicability of our work. The thesis concludes with an outlook on our ongoing and future efforts to further address the practical challenges associated with the integration of blockchain into the Smart World ecosystem.
Department: Computer Science and Engineering
Name: Pedram Kheirkhah Sangdeh
Date Time: Friday, February 10, 2023 - 2:00pm
Location: Zoom
Advisor: N/A
The ever-increasing demand for data-hungry wireless services and the rapid proliferation of wireless devices in the sub-6 GHz band have pushed current wireless technologies to a breaking point, necessitating efficient and intelligent strategies to utilize scarce communication resources. This thesis aims at leveraging novel communication frameworks, artificial intelligence techniques, and synergies between them to bring efficiency and intelligence to the next generation of wireless networks. We first propose new spectrum sharing and non-orthogonal multiple access schemes to enhance the spectral efficiency, connectivity, and throughput of cellular and Wireless Local Area Networks (WLANs). We then take advantage of recent advances in artificial intelligence to reduce the communication overhead of the channel sounding mechanism and accelerate resource allocation in WLANs. Our learning-based solutions efficiently utilize available communication and computation resources to facilitate multi-user MIMO and OFDMA in WLANs. We finally design a communication framework for accelerating federated learning in future intelligent transportation systems, where the heterogeneous capabilities and mobility of users, along with the limited bandwidth available for communications, are huge obstacles to making the network intelligent in a distributed manner. With the aid of a deadline-driven scheduler and asynchronous uplink multi-user MIMO, our proposed solution reduces data loss at vehicles in a dynamic vehicular environment, making a concrete step toward the practical adoption of federated learning in future transportation systems.
Department: Computer Science and Engineering
Name: Hossein Pirayesh
Date Time: Monday, January 30, 2023 - 4:00pm
Location: Zoom
Advisor: N/A
While interest in Internet of Things (IoT) applications has surged in recent years, the broad diversity in their constraints, such as power consumption, channel bandwidth, link robustness, and packet latency, still challenges state-of-the-art technologies to enable efficient and ubiquitous wireless connectivity for IoT devices in many practical scenarios. In this thesis, we study three sets of primary constraints in developing IoT networks: energy efficiency, spectral efficiency, and physical-layer security. First, this thesis introduces EE-IoT, an energy-efficient wireless communication scheme for IoT networks. EE-IoT allows low-complexity, non-multi-carrier IoT devices to communicate with an orthogonal frequency division multiplexing (OFDM)-based wireless local-area network (WLAN) access point (AP) at a very low sampling rate, thereby leading to a significant reduction in IoT devices’ hardware complexity and power consumption. This thesis further enables a transparent coexistence of IoT devices and legacy Wi-Fi devices. Second, to improve the spectral efficiency of dense IoT networks, this thesis introduces UD-MIMO, a practical uplink distributed multiple-input multiple-output (MIMO) scheme for WLANs, and MaLoRaGW, a first-of-its-kind multi-antenna long-range (LoRa) gateway that enables multi-user MIMO (MU-MIMO) LoRa communications in both uplink and downlink. The key enablers of the proposed schemes are new co-channel interference management techniques that allow Wi-Fi APs and LoRa gateways to concurrently serve multiple users in the absence of fine-grained inter-node synchronization. Third, this thesis introduces two jamming-resilient receiver architectures to secure vehicular ad hoc networks (VANETs) and ZigBee communications against high-power, in-band constant jamming attacks. The proposed schemes leverage multi-antenna technology and new signal detection methods to suppress jamming signals and decode desired signals. This thesis provides detailed information regarding the implementation of the proposed schemes on real-world wireless testbeds and evaluates their performance in practice.
Department: Electrical and Computer Engineering
Name: Xuhui Huang
Date Time: Friday, December 11th, 2024 - 9:00 a.m.
Advisor: Professor Yiming Deng
We explore the transformative potential of artificial intelligence (AI) and deep learning to enhance Structural Health Monitoring (SHM) and Nondestructive Evaluation (NDE). This research develops a novel framework integrating transfer learning, explainable AI techniques, data augmentation using generative models, and physics-informed deep learning approaches. It addresses critical challenges such as limited labeled data, nontransparent decision-making, and adaptability to varying operational conditions. By leveraging transfer learning and domain adaptation, the model effectively transfers knowledge from numerical models to experimental data, bridging the gap between modeling and real-world conditions. In addition, transferring knowledge through surrogate modeling involves simplifying complex physical phenomena to enable efficient forward prediction of response signals and to solve inverse problems for determining defect geometry. Applied to Motion-Induced Eddy Current Testing (MIECT), surrogate models enable real-time monitoring and adaptive responses. In particular, we utilized Gaussian Process Regression to integrate high- and low-fidelity MIECT data, improving predictive accuracy, while an auto-compensation algorithm enhances Pulsed Eddy Current (PEC) measurements by mitigating electromagnetic interference. By exploring various deep learning architectures, we demonstrate and compare their capability to accurately localize and characterize acoustic emission sources. Integrating explainable AI techniques like Class Activation Mapping (CAM) and Gradient-weighted CAM (Grad-CAM) transforms deep learning into an interpretable methodology, enhancing transparent decision-making. Together, this dual framework of deep learning and surrogate modeling significantly advances AI applications in NDE, providing a comprehensive approach to improving the scalability, adaptability, and reliability of NDE technologies in dynamic environments.
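For reference, the core Grad-CAM computation can be sketched as follows, given a convolutional feature map and the gradient of the class score with respect to it (both represented here by placeholder arrays); this is the standard formulation, not the dissertation's specific model or data.

    import numpy as np

    def grad_cam(feature_maps, gradients):
        """Standard Grad-CAM: channel weights are the spatially averaged gradients,
        and the heatmap is the ReLU of the weighted sum of feature maps.
        Both inputs have shape (channels, height, width)."""
        weights = gradients.mean(axis=(1, 2))              # alpha_k
        cam = np.tensordot(weights, feature_maps, axes=1)  # sum_k alpha_k * A_k
        cam = np.maximum(cam, 0.0)                         # ReLU
        return cam / (cam.max() + 1e-12)                   # normalize to [0, 1]

    rng = np.random.default_rng(6)
    features = rng.normal(size=(32, 14, 14))  # placeholder activations
    grads = rng.normal(size=(32, 14, 14))     # placeholder d(score)/d(activations)
    heatmap = grad_cam(features, grads)
    print(heatmap.shape)                      # (14, 14) localization map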
Department: Electrical and Computer Engineering
Name: Akash Saxena
Date Time: Friday, December 6th, 2024 - 2:00 p.m.
Advisor: Professor Erin Purcell
Intracortical neural implants (ICNTs) are a powerful tool to treat and study neurological disorders. The performance of these implants depends on successful recording and stimulation for extended periods (up to years). This requires the recorded signal to remain consistent throughout implantation, or, from the perspective of providing stimulation, the stimulation with the same parameters should exhibit similar effects. This doesn’t hold true for ICNTs; the recorded signals exhibit intra-day variability, loss of signal quality, and potential desensitization to stimulation over chronic periods. The biological tissue response is a significant factor contributing to the loss of recording quality and signal instability for intracortical neural implants at chronic time points. Neuronal death and the presence of astrocytes around the implant are quantified to measure the strength of the tissue response to the implanted electrode. The usual trend observed is increasing neuronal death, the presence of astrocytes near the implant, and the formation of a glial sheath around the implant at chronic time points. The biocompatibility of available neural implants is primarily judged based on these two metrics. These metrics have guided various designs to reduce the tissue response, lowering both astrocytic and neuronal death and density around the implant. However, the tissue response is still triggered, and signal instability remains problematic. This leads us to believe that conventional metrics alone are insufficient in guiding implant design. Other metrics must be uncovered to complete the parameter space governing the biological tissue response to neural implants. The goal of this thesis is to create computational pipelines using signal processing, image processing, and data analysis methods to (1) better understand the interaction between the tissue and neural implant, (2) uncover variables that might affect the recording quality of the implant, and (3) potentially guide future neural implant design from the perspective of gene expression, metrics of extracellular recordings, and astrocyte morphology.
Department: Electrical and Computer Engineering
Name: Hassa Banna
Date Time: Tuesday, December 3rd, 2024 - 3:15 p.m.
Advisor: Professor Wen Li
Analysis of trace-level metals in environmental samples (e.g., soil, water, and plant samples) is essential for assessing environmental quality and food safety. This dissertation reports a non-toxic, eco-friendly, and cost-effective sensing method, capable of in-situ detection of microgram per liter (µg/L) levels of heavy metal ions in plant and soil solutions using carbon-based electrodes, including carbon fiber electrodes (CFEs) and boron-doped diamond electrodes (BDDs). The electrochemical behaviors of the CFEs and BDDs were characterized by cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS) measurements. As proof of principle, the CFEs and BDDs were validated for sensing selected heavy metals in buffer solutions as well as in extracted plant and soil solutions using differential pulse anodic stripping voltammetry (DP-ASV). The ideal pH range for heavy metal detection was also extensively investigated and was found to be between pH 4.0 and pH 5.0. Experimental results confirm that the CFEs were able to simultaneously measure cadmium (Cd), lead (Pb), and mercury (Hg) with a limit of detection (LOD) of 2.10 µg/L in buffer solution with an effective area (Aeff) of 0.123 cm2, showcasing good selectivity and sensitivity. On the other hand, the BDD electrodes showed simultaneous measurement of these metals with an LOD of 17.34 µg/L in buffer solution with Aeff of 0.122 mm2. Besides, BDD offers precise control over the fabrication by utilizing a microfabrication facility. Overall, the integration of these sensors with a microfluidics system lays a better foundation for long-term, in-situ, and stable electrochemical analysis for aqueous environment matrices.
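For reference, a limit of detection such as the values quoted above is conventionally estimated from calibration data as $\mathrm{LOD} = 3\sigma_{\mathrm{blank}}/S$, where $\sigma_{\mathrm{blank}}$ is the standard deviation of the blank (or lowest-concentration) response and $S$ is the calibration slope, i.e., the sensitivity; the exact convention used in this work may differ (e.g., $3.3\sigma/S$).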
Department: Electrical and Computer Engineering
Name: Wesley Spain
Date Time: Monday, November 18th, 2024 - 10:30 a.m.
Advisor: Dr. John Albrecht and Dr. Matthew Hodek
IC packaging is a critical factor in emerging next-generation RF and mmWave systems design. As demand for higher data bandwidth and greater device connectivity increases, methods for developing low-cost, high-quality RF systems in the mmWave range and beyond must be developed and improved upon. Many traditional manufacturing techniques have been iterated on to address this issue, but most run into a hard limit in terms of RF performance and the ability to miniaturize heterogeneously integrated architectures into cost-effective packages.
Additive manufacturing (AM) offers emerging processes that may be used to address these issues, providing solutions that are low operating cost and flexible to a wide range of design geometries. Some high performance designs that are difficult or unavailable with traditional manufacturing techniques may be realized using AM, extending the use of more robust IC packaging to high frequency applications.
This dissertation presents engineering advancements in the field of RF and mmWave systems manufacturing through the use of AM techniques. Chip-in-Pocket (CiP) IC packaging is investigated, including the impact of printed die fill materials and interconnects on RF system performance at Ku-band. Printed die attach techniques and their effect on the reliability of printed interconnects and die leveling are explored. Finally, a process for transferring printed RF components and packages from the printing substrate to other surfaces is demonstrated for Ku- to Ka-band components as a means to improve the manufacturing reliability of systems leveraging AM components and to demonstrate the efficacy of combining AM components with traditional manufacturing. Aerosol-Jet Printing (AJP) is leveraged as the main AM method for high-precision RF structures, from IC interconnects and vias all the way up to full IC packages that may be applied to PCB board assemblies.
Department: Electrical and Computer Engineering
Name: Pouria Tooranjipour
Date Time: Wednesday, October 16th, 2024 - 3:00 p.m.
Advisor: Dr. Bahare Kiumarsi
This dissertation develops high-performance safe control algorithms for autonomous systems under deterministic and stochastic uncertainties. The research is divided into two main parts: deterministic and stochastic control systems.
We focus on constructing safety certificates for unknown linear and nonlinear optimal control systems in the deterministic domain. We introduce an online method to develop control barrier certificates (CBCs) that expand the domain of attraction (DoA) without compromising performance. By formulating a feasible optimization problem using a relaxed algebraic Riccati equation (ARE) for linear systems and a relaxed Hamilton-Jacobi-Bellman (HJB) equation for nonlinear systems, alongside safety constraints, we identify the maximum barrier-certified region—called safe optimal DoA—where stability and safety coexist. To address the need for complete system dynamics knowledge, we propose an online data-driven approach employing a safe off-policy reinforcement learning algorithm, which learns a safe optimal policy while using a different exploratory policy for data collection.
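As a rough schematic of how a safety certificate and a relaxed optimality condition can be combined for linear dynamics dx/dt = Ax + Bu (illustrative background only, not the dissertation's exact formulation):

    \[ A^{\top}P + PA - PBR^{-1}B^{\top}P + Q \preceq 0, \qquad P \succ 0, \]
    \[ \nabla h(x)^{\top}\big(Ax + Bu^{*}(x)\big) \ge -\alpha\big(h(x)\big), \qquad u^{*}(x) = -R^{-1}B^{\top}Px, \]

where the inequality form of the Riccati condition is the "relaxation," h defines the barrier-certified set {x : h(x) ≥ 0}, and α is a class-K function; the safe optimal DoA is then the largest region for which both conditions can be certified.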
Building upon these results, we incorporate disturbances using the $H_{\infty}$ control framework to attenuate unknown disturbances while ensuring safety and optimality. We unify the robustness of CBCs with $H_{\infty}$ control methods to construct a robust and safe optimal DoA. A feasible optimization problem is developed using the relaxed game algebraic Riccati equation (GARE), solved iteratively via a sum-of-squares (SOS)-based safe policy iteration algorithm. To demonstrate practical applicability, we develop a LiDAR-based model predictive control (MPC) framework that incorporates control barrier functions (CBFs). We reduce computational complexity by synthesizing CBFs from clustered LiDAR data and integrating them into the MPC framework while ensuring safety and recursive feasibility. We validate this approach through simulations and experiments on a unicycle-type robot.
In the stochastic domain, we synthesize risk-aware safe optimal controllers for partially unknown linear systems under additive Gaussian noise. By utilizing Conditional Value-at-Risk (CVaR) in the one-step cost function, we account for extremely low-probability events without excessive conservatism. Safety is guaranteed with high probability by imposing chance constraints. An online data-driven quadratic programming optimization simultaneously and safely learns the unknown dynamics and controls the system, tightening safety constraints as model confidence increases. We extend this framework to a fully risk-aware MPC for chance-constrained discrete-time linear systems with process noise, incorporating CVaR in both constraints and cost function. This approach ensures constraint satisfaction and performance optimization across the spectrum of risk assessments in stochastic environments. Recursive feasibility and risk-aware exponential stability are established through theoretical analysis.
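For reference, the Conditional Value-at-Risk and chance-constraint ingredients mentioned above are conventionally written as follows (standard definitions, not the dissertation's specific problem setup):

    \[ \mathrm{CVaR}_{\beta}(Z) = \min_{z \in \mathbb{R}} \Big\{ z + \tfrac{1}{1-\beta}\,\mathbb{E}\big[(Z - z)_{+}\big] \Big\}, \qquad \Pr\big(x_{t} \in \mathcal{X}_{\mathrm{safe}}\big) \ge 1 - \epsilon, \]

where β is the risk level, (·)+ = max{·, 0}, and ε is the allowed probability of constraint violation.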
Finally, we present a data-driven risk-aware MPC framework where the mean and covariance of the noise are unknown and estimated online. We provide a computationally efficient solution to the multi-stage CVaR optimization problem using dual representations and data-driven ambiguity sets, casting it as a tractable semidefinite programming (SDP) problem. Recursive feasibility and risk-aware exponential stability are demonstrated, with numerical examples illustrating the efficacy of the proposed methods.
Overall, this dissertation addresses challenges in unknown dynamics, disturbances, risk assessment, and computational tractability, providing robust and efficient solutions for safe optimal control in both deterministic and stochastic settings.
Department: Electrical and Computer Engineering
Name: Xinda Qi
Date Time: Monday, August 5th, 2024 - 12:00 p.m.
Advisor: Dr. Xiaobo Tan
Soft robots are developed and studied for their safety and adaptability in various applications. Compared to their rigid counterparts, soft robots can use their deformable bodies to adapt to challenging environments and tolerate collisions and inaccuracies. Natural animals, due to their intrinsic softness, have become popular inspirations for many soft robots whose designs are influenced by biological structures.
Snakes, known for their adaptability and flexibility, inspire the development of limbless mobile robots for tasks in complex environments. In this work we first propose a novel pneumatic soft snake robot that uses traveling-wave deformation to navigate complex, constrained environments, such as pipeline systems. The unique pneumatic system in the modular snake robot generates traveling-wave deformation with only four independent air channels. Experimental results show good agreement with finite element modeling (FEM) predictions and demonstrate the robot's adaptability in complex pipeline systems. Additionally, a spiral-type soft snake robot is proposed for more robust locomotion in constrained environments, utilizing rotated helix-like deformation for propulsion.
Beyond locomotion in constrained environments, we develop a 3D-printed multi-material snakeskin with orthotropic frictional anisotropy, inspired by real snakeskin, to enable undulatory slithering of the robot on planar rough surfaces. This snakeskin comprises a soft base with embedded rigid scales, mimicking real snakeskin. The designs generate various frictional anisotropies that propel the robot during serpentine locomotion. Experiments show effective serpentine locomotion on artificial and outdoor surfaces such as canvas and grass.
Given the complexity of the dynamic model of the snake robot's serpentine locomotion, a model-free reinforcement learning approach is chosen for integrated locomotion and navigation. We propose Back-stepping Experience Replay (BER) to enhance learning efficiency in systems with approximate reversibility, reducing the need for complex reward shaping. BER is used in the soft snake robot's locomotion and navigation task, with a dynamic simulator assessing the algorithms' effectiveness. The robot achieves a 100% success rate in learning, reaching random targets 48% faster than the best baseline approach.
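The abstract does not detail the BER mechanism; the Python sketch below only illustrates the general idea of exploiting approximate reversibility in a replay buffer, using a hypothetical reverse_action map, and should not be read as the dissertation's algorithm.

    import random
    from collections import deque

    def reverse_action(action):
        # Hypothetical inverse-action map; assumes the system is approximately
        # reversible, so negating the command roughly undoes a transition.
        return [-a for a in action]

    class ReversibleReplayBuffer:
        """Replay buffer that also stores time-reversed copies of transitions."""

        def __init__(self, capacity=100_000):
            self.buffer = deque(maxlen=capacity)

        def add_episode(self, episode):
            # episode: list of (state, action, reward, next_state) tuples
            for (s, a, r, s_next) in episode:
                self.buffer.append((s, a, r, s_next))
            # Augment with a backward version of the episode (illustrative only).
            for (s, a, r, s_next) in reversed(episode):
                self.buffer.append((s_next, reverse_action(a), r, s))

        def sample(self, batch_size):
            return random.sample(self.buffer, min(batch_size, len(self.buffer)))

    # Usage: buf = ReversibleReplayBuffer(); buf.add_episode(collected_episode)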
In addition to mobile robots, bio-inspired soft robots have been proposed for robotic manipulators, enabling safe and robust interactions with humans and delicate objects. Inspired by octopus tentacles, we design a multi-section cable-driven soft robotic arm with novel kinematic modeling. An analytical static model captures the interaction between the actuation cable and the soft silicone body, and in particular, the transversal deformation effect. Experiments show that the soft robotic arm has high flexibility and a large workspace, and that the proposed model outperforms a baseline model in robot behavior prediction and open-loop tracking control.
Department: Electrical and Computer Engineering
Name: Jitendra Thapa
Date Time: Monday, July 29th, 2024 - 10:00 a.m.
Advisor: Dr. Mohammed Ben-Idris
Traditional power systems are transitioning toward more sustainable electricity generation and supply systems. One of the major contributors to this transition is the increased penetration of renewable energy resources, which helps to promote clean energy production, diversify the energy mix, and reduce carbon emissions. However, the trend of increasing renewable energy resources has started to disrupt the conventional paradigm of power system operations. Therefore, modern electric utilities are seeking solutions to integrate these resources without disturbing the security and reliability of their existing systems.
Along with the rise of renewable energy resources and the retirement of conventional generation, distributed energy resources (DERs) are becoming more prevalent in modern electric grids. DERs are small-scale resources connected at the medium and low voltage distribution networks, which include, but are not limited to, photovoltaics (PV), wind, battery energy storage, and microturbines. With the evident expectation of heavy penetration of DERs in the near future, it has become more important than ever before to enable DERs to provide ancillary grid services. In this regard, DERs can be used independently or through aggregation to provide ancillary services to the grid. Though the contribution of a single DER or a distribution system consisting of multiple DERs to grid services may not be significant, stacked and coordinated contributions from several active distribution systems or aggregators can provide frequency regulation and other grid services at scale. However, the large-scale integration of DERs poses challenges in the planning, operation, and management of an existing power grid. These challenges call for developing a framework that provides avenues for their large-scale integration and assists in employing them for ancillary grid services. In this context, FERC Order 2222 has also established standards to enable and promote the participation of behind-the-meter DERs for several grid services. Whereas the regulations have been formulated, the practical challenges associated with their integration and adoption for ancillary grid services are still a concern for electric power utilities.
This dissertation addresses these critical challenges by developing a comprehensive framework and real-time control strategy to coordinate and optimally dispatch DERs and utility-scale resources for one of the important ancillary grid services: secondary frequency regulation. The study designs a novel mathematical model for implementing secondary frequency regulation at both the distribution and transmission levels. A deep reinforcement learning-based strategy is proposed that effectively manages diverse portfolios of resources, handles the complexities associated with their diverse characteristics, and accurately dispatches the resources for Automatic Generation Control (AGC). Furthermore, a serverless cloud computing architecture and a grid response time analysis are developed for practical deployment of the proposed secondary frequency control algorithm in the field. Moreover, a comprehensive framework is developed to build an electromagnetic transient (EMT) model of a large-scale power grid that can be used to validate the proposed secondary frequency control on accurate power system models in real time. In addition, the proposed serverless cloud computing architecture, together with simulation in a real-time digital simulator (RTDS), provides a high-fidelity prototype for practical deployment of secondary frequency regulation and can be extended to other power system control problems.
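As background on the control objective (standard AGC terminology rather than the dissertation's specific model), secondary frequency regulation drives each balancing area's area control error to zero:

    \[ \mathrm{ACE}_{i} = \Delta P_{\mathrm{tie},i} + B_{i}\,\Delta f, \]

where ΔP_tie,i is the tie-line power deviation of area i, Δf is the frequency deviation, and B_i is the frequency bias; the regulation signal derived from ACE is what the coordinated DERs and utility-scale resources are dispatched to track.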
The results, mathematical models, and large-scale power system models proposed in this study provide major advances and important insights toward enabling active distribution networks consisting of DERs and utility-scale resources to provide secondary frequency regulation.
In summary, the thesis presented here is that distributed energy resources, when properly coordinated and controlled, can provide frequency regulation and control at scale.
Department: Electrical and Computer Engineering
Name: Ciaron Nathan Hamilton
Date Time: Wednesday, July 3rd, 2024 - 12:00 p.m.
Advisor: Dr. Yiming Deng
Nondestructive Evaluation (NDE) 4.0 is an emerging approach that brings automation to material inspection using innovative techniques from Industry 4.0. Such approaches offer vast data acquisition and analysis potential for assessing physical components that must be inspected to avoid structural failure. Inspection of conductive materials is possible through surface scanning procedures such as eddy current testing (ECT), which uses electromagnetic induction to find defects in conductive materials. In the case of this dissertation, corrosion may be detected with ECT before it continues to grow and damage larger components. Corrosion is "the cancer" of metallic structures, costing billions of dollars in irreversible damage annually. In some instances, corrosion occurs under paint, where it can be nearly invisible to visual inspection. ECT can be applied in place; however, many components need fast and robust scanning procedures. Fast scanning can be enabled with eddy current arrays (ECAs), which use repeated coils to increase scan area or cut down scan time, sweeping across the surface like a paintbrush to obtain information about the material's health. ECAs also allow different configurations that can benefit data analysis, such as a differential scanning mode. Inspection may be automated using robotic arm systems equipped with ECAs, allowing fast, repeatable, and robust scanning. This may be useful for large components that can be brought into a "robot arm sensor wash" system, such as automobiles or military vehicles. One barrier to robust "freeform" scanning is obtaining the scan path along which the ECA will glide, as components come in different shapes and sizes, sometimes with curved or complex geometries. The focus of this dissertation is to provide NDE 4.0 techniques, along with ECAs, to detect corrosion on curved steel sheets. The NDE 4.0 techniques presented merge cyber-physical systems (CPS), computer vision, and the concept of digital twins linking physical and digital space. To enable NDE 4.0 for robotic inspection, a framework was developed with five major steps: obtain a reconstruction of the physical object and surrounding environment, orient this virtual scene with respect to the robot's base frame, generate a toolpath along which the NDE probe will be manipulated, conduct the ECA scan with six degrees of freedom (6-DOF), and process the NDE results. A novel algorithm, "ray-triangle intersection arrays," was developed to enable pathing on meshes from a raster pattern. The framework was designed to generalize to any surface scanning probe; ultrasonic (UT) scanning for carbon fiber inspection is also demonstrated using the same framework. For ECA, it is important to keep the probe close to the surface so that the distance between the sensor and the surface, or lift-off, is minimized. At the scale of the defects examined, approximately 0.05 mm deep at most, otherwise minor tilts of the probe become significant.
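The "ray-triangle intersection arrays" algorithm itself is not detailed in the abstract; as an illustration of the kind of geometric primitive such raster path planning could build on, the following Python sketch casts a raster grid of rays at a mesh triangle using the standard Moller-Trumbore intersection test (all geometry and names here are illustrative assumptions).

    import numpy as np

    def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
        """Moller-Trumbore test; returns ray parameter t at the hit, or None."""
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(direction, e2)
        det = np.dot(e1, p)
        if abs(det) < eps:              # ray parallel to the triangle plane
            return None
        inv_det = 1.0 / det
        tvec = origin - v0
        u = np.dot(tvec, p) * inv_det
        if u < 0.0 or u > 1.0:
            return None
        q = np.cross(tvec, e1)
        v = np.dot(direction, q) * inv_det
        if v < 0.0 or u + v > 1.0:
            return None
        t = np.dot(e2, q) * inv_det
        return t if t > eps else None

    # Cast a raster pattern of downward rays at one mesh triangle to collect
    # candidate toolpath points on the surface (illustrative only).
    triangle = (np.array([0.0, 0.0, 0.0]),
                np.array([1.0, 0.0, 0.0]),
                np.array([0.0, 1.0, 0.0]))
    direction = np.array([0.0, 0.0, -1.0])
    toolpath_points = []
    for x in np.linspace(0.0, 1.0, 11):
        for y in np.linspace(0.0, 1.0, 11):
            origin = np.array([x, y, 1.0])
            t = ray_triangle_intersect(origin, direction, *triangle)
            if t is not None:
                toolpath_points.append(origin + t * direction)
    print(f"{len(toolpath_points)} raster rays landed on the triangle")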
The ECA probe contains 32 channels and was operated at 500 kHz using absolute-mode scanning, allowing exceedingly small defect depths to be detected. The effects of ECA scanning with a robot system are examined, showing that tilt errors, whether from the path-planning procedure or from the calibration of the robot, introduce significant errors. To better understand the effects per coil, a "full" scan mode producing a larger image per coil was examined, alongside the typical painting scan considered the "fast" scan. Other errors, such as heating, were also examined. With knowledge of the errors arising from robotic scanning, post-processing procedures were developed to minimize them. A novel algorithm, "array subtraction," was developed to reduce lift-off effects from common factors seen in every coil, which indicate probe tilt error. A digital microscope was used to compare ground-truth defect volume with the ECA results, using defect-versus-background intersection masking. The three hypotheses discussed cover the generalized robust surface scanning framework, the dissection of the effects of robotic ECA scanning of corroded surfaces, and how to process and interpret the ECA data. The results show promising future applications for robust surface scanning, as corrosion is detected reasonably well. Future applications include the previously mentioned carwash-style system, AI-enabled detection, and mobile platforms to expand inspection workspaces.
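The exact "array subtraction" algorithm is not specified in the abstract; one plausible reading, sketched below in Python, removes the per-position component common to all coils (attributed to probe tilt and lift-off) before defect analysis. This is an interpretation for illustration, not the dissertation's implementation.

    import numpy as np

    def array_subtraction(scan, robust=True):
        """scan: (n_coils, n_positions) ECA amplitudes.

        Subtract, at each scan position, the component shared by every coil
        (median across coils), leaving coil-specific variations such as defects.
        """
        common = np.median(scan, axis=0) if robust else np.mean(scan, axis=0)
        return scan - common[np.newaxis, :]

    # Synthetic example: a slow lift-off drift common to all 32 coils plus a
    # small local indication on one coil.
    rng = np.random.default_rng(0)
    positions = np.linspace(0, 1, 500)
    scan = np.tile(0.2 * np.sin(2 * np.pi * positions), (32, 1))
    scan += 0.01 * rng.standard_normal(scan.shape)
    scan[10, 240:260] += 0.05          # local defect-like indication
    cleaned = array_subtraction(scan)
    print("residual common-mode amplitude:",
          np.abs(np.median(cleaned, axis=0)).max())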
Department: Electrical and Computer Engineering
Name: Hrishikesh Dutta
Date Time: Tuesday, May 7th, 2024 - 3:00 p.m.
Advisor: Dr. Subir Biswas
The proliferation of the Internet of Things (IoT) and Wireless Sensor Networks (WSNs) has led to the widespread deployment of devices and sensors across various domains such as wearables, smart cities, agriculture, and health monitoring. These networks usually comprise resource-constrained nodes with ultra-thin energy budgets. As a result, it is important to design network protocols that judiciously utilize the available networking resources while minimizing energy consumption and maintaining network performance. Standardized protocols often underperform under general conditions because of their inability to adapt to changing networking conditions, including topological and traffic heterogeneities and various other dynamics. In this thesis, we develop a novel paradigm of learning-enabled network protocol synthesis to address these shortcomings.
The key concept here is that each node, equipped with a Reinforcement Learning (RL) engine, learns situation-specific protocol logic to improve network performance. The nodes' behavior under different heterogeneous and dynamic network conditions is formulated as a Markov Decision Process (MDP), which is then solved using RL and its variants. The paradigm is implemented in a decentralized setting, where each node learns its policies independently without centralized arbitration. To handle the challenges of limited information visibility in partially connected mesh networks in such decentralized settings, different design techniques, including confidence-informed parameter computation and localized-information-driven updates, have been employed. We specifically focus on developing frameworks for synthesizing access control protocols that improve network performance from multiple perspectives, viz., network throughput, access delay, energy efficiency, and wireless bandwidth usage.
A multitude of learning innovations is adopted to explore the protocol synthesis concept in a diverse set of MAC arrangements. First, the framework is developed for a random access MAC setting, where the learning-driven logic is shown to minimize collisions while providing a fair share of wireless bandwidth across the network. A hysteresis-learning-enabled design is exploited to handle the trade-off between convergence time and performance in a distributed setting. Next, the ability of the learning-driven protocols is explored in a TDMA-based MAC arrangement for enabling decentralized slot scheduling and transmit-sleep-listen decision making. We demonstrate how the proposed approach, using a multi-tier learning module and context-specific decision making, enables the nodes to make judicious transmission/sleep decisions on the fly to reduce energy expenditure while maintaining network performance. The multi-tier learning framework, comprising cooperative Multi-Armed Bandit (MAB) and RL agents, solves a multidimensional network performance optimization problem. This system is then improved from a scalability and adaptability perspective by employing a Contextual Deep Reinforcement Learning (CDRL) framework. The energy management framework is then extended to energy-harvesting networks with spatiotemporal energy profiles. A learning-confidence-parameter-guided update rule is developed to make the framework robust to the unreliability of RL observables. Finally, the thesis investigates protocol robustness against malicious agents, demonstrating the versatility and adaptability of learning-driven protocol synthesis in hostile networking environments.
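As a toy illustration of the decentralized learning idea (not the thesis's multi-tier MAB/RL framework), the Python sketch below has each node independently learn, via epsilon-greedy bandit-style updates, which TDMA slot to transmit in, with rewards for collision-free transmissions; all parameters are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    N_NODES, N_SLOTS, EPISODES = 4, 4, 3000
    ALPHA, EPSILON = 0.1, 0.1

    # One independent Q-table per node; the single action is the slot choice.
    q = np.zeros((N_NODES, N_SLOTS))

    for _ in range(EPISODES):
        # Each node picks a slot (epsilon-greedy) without any central arbiter.
        choices = [
            rng.integers(N_SLOTS) if rng.random() < EPSILON else int(np.argmax(q[n]))
            for n in range(N_NODES)
        ]
        for n, slot in enumerate(choices):
            collided = choices.count(slot) > 1
            reward = -1.0 if collided else 1.0
            # Stateless (bandit-style) Q-update toward the observed reward.
            q[n, slot] += ALPHA * (reward - q[n, slot])

    print("learned slots per node:", [int(np.argmax(q[n])) for n in range(N_NODES)])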
Department: Electrical and Computer Engineering
Name: Ehsan Ashoori
Date Time: Monday, May 6th, 2024 - 11:00 a.m.
Advisor: Dr. Andrew Mason
Assistive technologies have emerged as powerful tools for assessing physical health and wellness through monitoring physiological parameters such as movement and heart rate. However, our overall health is influenced not only by physiological parameters but also by mental health factors and environmental influences. Therefore, in the pursuit of holistic wellness, assistive technologies need to support multimodal sensing to monitor various aspects of individuals' health, including physiological health, mental wellness, and environmental parameters that influence personal health and wellness. The challenges arise when these technologies must be implemented in real time and in miniaturized point-of-care platforms where multimodal sensing algorithms must run efficiently and resources, including power, are limited. Solving these challenges requires converging engineering practices with psychological and physiological principles. This work aims to implement resource-efficient algorithms to assess social interaction parameters as an important mental health factor and to enable high-performance point-of-care devices to monitor physiological and environmental parameters in a miniaturized and effective manner. In this work, an extensive dataset for human interaction in virtual settings was prepared. Efficient algorithms were developed to identify levels of two highly important social interaction parameters, 'affect' and 'rapport'. We analyzed affect in time intervals based on conversation turns and analyzed rapport in 30-second time intervals, which is the highest temporal resolution reported in the literature. We achieved an affect prediction accuracy of 77% and a rapport prediction accuracy of 72%, which are the highest reported results for analyzing multi-person groups. Furthermore, to support monitoring physiological and environmental parameters, electrochemical solutions were identified as a highly effective method. We introduced a new architecture to overcome the limited supply potentials in modern point-of-care devices. In our novel design, the potential window for electrochemical reactions doubles compared to traditional designs. This, in turn, enables a significantly wider range of target elements to be monitored with this novel architecture. Overall, the enhanced algorithms and architecture introduced in this work enable multimodal sensing of important personal health and wellness parameters.
Department: Electrical and Computer Engineering
Name: Yu Zheng
Date Time: Friday, April 5th, 2024 - 8:30 a.m.
Advisor: Dr. Mi Zhang
The significant progress of deep learning models in recent years can be attributed primarily to the growth of model scale and of the volume of data on which models are trained. Although scaling up a model with sufficient training data typically provides enhanced performance, the amount of memory and the GPU hours used for training pose great challenges for deep learning infrastructures. Another challenge in training a good deep learning model is the quantity of data it is trained on. To achieve state-of-the-art performance, it has become standard to train or fine-tune deep neural networks on a dataset augmented with well-designed augmentation transformations; this introduces the difficulty of efficiently identifying the best data augmentation strategies for training. Furthermore, there has been a noticeable increase in dataset size across many learning tasks, which constitutes the third challenge for modern deep learning systems. Very large datasets pose great burdens on storage and training cost, and it can be prohibitive to perform hyperparameter optimization and neural architecture search on networks trained on such massive datasets.
In this dissertation, we address the first challenge from a model-centric perspective. We propose MSUNet, which is designed with four key techniques: 1) ternary conv layers, 2) sparse conv layers, 3) quantization, and 4) a self-supervised consistency regularizer. These techniques allow faster training and inference of deep learning models without significant loss of accuracy. We then look at deep learning systems from a data-centric perspective. To deal with the second challenge, we propose Deep AutoAugment (DeepAA), a multi-layer data augmentation search method that aims to remove the need to craft augmentation strategies manually. DeepAA fully automates the data augmentation process by searching for a deep data augmentation policy on an expanded set of transformations. We formulate the search for a data augmentation policy as a regularized gradient matching problem, maximizing the cosine similarity of the gradients between augmented data and original data with regularization. To avoid exponential growth of the dimensionality of the search space when more augmentation layers are used, we incrementally stack augmentation layers based on the data distribution transformed by all previous augmentation layers. DeepAA achieves the best performance compared to existing automatic augmentation search methods evaluated on various models and datasets. To tackle the third challenge, we propose a dataset condensation method that distills the information from a large dataset into a small condensed dataset. The condensation is realized by matching the training trajectories on the original dataset with those on the condensed dataset. Experiments show that our proposed method outperforms the baseline methods. We also demonstrate that the method can benefit continual learning and neural architecture search.
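As a minimal numerical illustration of the gradient-matching objective behind DeepAA (a toy logistic-regression example, not the dissertation's implementation or models), the following Python sketch computes the cosine similarity between the loss gradient on an "augmented" batch and the gradient on the original data; an augmentation policy would be scored and updated to increase this similarity.

    import numpy as np

    rng = np.random.default_rng(0)

    def logistic_grad(w, X, y):
        """Gradient of the mean logistic loss for labels y in {0, 1}."""
        p = 1.0 / (1.0 + np.exp(-X @ w))
        return X.T @ (p - y) / len(y)

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    # Toy dataset and a toy "augmentation" (small Gaussian jitter of the inputs).
    X = rng.standard_normal((256, 10))
    w_true = rng.standard_normal(10)
    y = (X @ w_true + 0.1 * rng.standard_normal(256) > 0).astype(float)
    w = rng.standard_normal(10)              # current model parameters

    g_orig = logistic_grad(w, X, y)          # gradient on original data
    X_aug = X + 0.05 * rng.standard_normal(X.shape)
    g_aug = logistic_grad(w, X_aug, y)       # gradient on augmented data

    # DeepAA-style score: higher cosine similarity means the augmentation
    # produces gradients aligned with those of the original data.
    print("gradient cosine similarity:", round(cosine(g_aug, g_orig), 4))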
Department: Electrical and Computer Engineering
Name: Daniel Chen
Date Time: Thursday, April 4th, 2024 - 9:00 a.m.
Advisor: Dr. Jeffrey A. Nanzer
The need for fast and reliable sensing at millimeter-wave frequencies has been increasing dramatically in recent years for a wide range of applications including non-destructive evaluation, medical imaging, and security screening such as concealed contraband detection. Imaging-based approaches have been of particular interest since the wavelengths at millimeter-wave frequencies provide good resolution and are capable of propagating through clothing with negligible attenuation, allowing the identification of concealed contraband. While various implementations of millimeter-wave imaging have been developed, the new technique of active incoherent millimeter-wave (AIM) imaging, developed in our research group, is of particular interest because it solves fundamental limitations inherent in other approaches. Furthermore, AIM enables imaging with significantly fewer elements than phased arrays and costs less than passive imagers. This is enabled by actively transmitting noise signals, allowing the system to capture scene information in the spatial Fourier domain. When the received signals at the array elements are spatio-temporally incoherent, the spatial coherence function of the captured signals represents samples of the measured visibility, which can be further processed via an inverse Fourier transform to recover the measured scene. With a good-quality recovered image, additional processing can be applied for detection and/or classification of specific spatial features. However, images often contain more information than is necessary for effective classification, which means that unnecessary resources are used for the collection and processing of redundant information.
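For reference, the relationship exploited here is the standard interferometric one (stated generically; the dissertation's notation may differ): the spatial coherence of signals received at two antennas separated by a baseline of (u, v) wavelengths samples the scene's visibility, whose inverse Fourier transform is the scene intensity:

    \[ V(u, v) = \big\langle E_{1}(t)\, E_{2}^{*}(t) \big\rangle, \qquad I(l, m) \approx \iint V(u, v)\, e^{\, j 2\pi (u l + v m)}\, \mathrm{d}u\, \mathrm{d}v, \]

where (l, m) are direction cosines of the scene and the angle brackets denote time averaging over the incoherent noise illumination.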
In this dissertation, I present the design and analysis of array dynamics for radar and remote sensing applications. Specifically, I investigate approaches to measure specific spatial Fourier information that can be used for direct classification, thereby eliminating the need for full image recovery. I present an adapted formulation of the spatial coherence function that considers individual antenna trajectories within a dynamic antenna array. The measured visibility hence becomes a function of the array trajectory over a slow-time dimension. The use of array dynamics further reduces the hardware requirements of the AIM technique by introducing a new degree of freedom in the array design. By allowing the receiving elements of the antenna array to move dynamically across the measurement plane, the spatial Fourier domain can be efficiently sampled using as few as two receiving antennas. The effects of different trajectory choices on the measured spatial Fourier information are discussed. Furthermore, I expand on a specific array trajectory in which as few as two antennas can generate a ring filter (i.e., a spatial Fourier sampling function in the form of a ring) that can efficiently identify spatial Fourier artifacts pertaining to sharp edges in the scene. This enables an imageless approach to differentiating scenes containing sharp-edged objects, which are generally man-made. I then present a real-time rotational dynamic antenna array operating at 75 GHz with two noise-transmitting sources, as required by the AIM technique, and two receivers to generate the ring filter. Compared to traditional millimeter-wave imaging, this non-imaging approach further reduces the required number of antennas. Experimental measurements using the AIM-based rotational dynamic antenna array demonstrate the possibility of detecting concealed contraband via directly measured spatial Fourier-domain information.
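The following short Python sketch illustrates, with made-up geometry rather than the 75 GHz system's parameters, why two receivers rotating about a common center sample a ring in the spatial-frequency plane: the baseline between them keeps a constant length while its orientation sweeps through all angles.

    import numpy as np

    wavelength = 4e-3          # roughly 75 GHz, in meters (illustrative)
    radius = 0.05              # each receiver 5 cm from the rotation center
    angles = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)

    # Two receivers on opposite sides of the rotation center.
    rx1 = radius * np.column_stack([np.cos(angles), np.sin(angles)])
    rx2 = -rx1

    # Baseline in wavelengths -> sampled (u, v) points in the spatial Fourier plane.
    uv = (rx1 - rx2) / wavelength
    ring_radii = np.linalg.norm(uv, axis=1)
    print("min/max |(u, v)| in wavelengths:",
          round(ring_radii.min(), 2), round(ring_radii.max(), 2))
    # The radius is constant (= 2 * radius / wavelength): the samples trace a
    # ring that acts as a band-pass "ring filter" on spatial frequencies.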
Department: Electrical and Computer Engineering
Name: Bharath Basti Shenoy
Date Time: Thursday, December 7th, 2023 - 8:00 a.m.
Advisor: Lalita Udpa and Sunil Chakrapani
Part I of this dissertation defense explores the application of Magnetic Barkhausen Noise and Non-Linear Eddy Current techniques for the early-stage detection of fatigue in ferromagnetic materials, with a specific focus on Martensitic Stainless-steel samples. Due to its exceptional mechanical properties at elevated temperatures, stainless steel finds extensive use in various applications. However, material fatigue poses a significant challenge in steel structures, leading to potential catastrophic damage and substantial economic consequences. While conventional nondestructive evaluation techniques excel at detecting macro defects, they often fall short in identifying material degradation at the microstructure level, particularly arising from fatigue.
The Magnetic Barkhausen Noise technique involves capturing signals generated by the movement of domain walls upon application of a time-varying magnetic field. Different fatigue stages yield unique Magnetic Barkhausen Noise signatures, facilitating effective classification. In the Non-Linear Eddy Current technique, a strong external magnetic field induces non-linear behavior in the material's magnetization characteristic. The harmonics extracted from the Non-Linear Eddy Current signal provide insights into the material's microstructure, aiding the classification of samples at various fatigue stages. The research systematically investigates the feasibility of the Magnetic Barkhausen Noise and Non-Linear Eddy Current techniques by employing customized sensor assemblies to capture and analyze signals in both the time and frequency domains. Extracted features are further processed using the k-medoids clustering algorithm and a genetic algorithm for robust classification into distinct fatigue stages. The comparative performance of the two magnetic non-destructive evaluation techniques is thoroughly examined.
The research findings indicate that both the Magnetic Barkhausen Noise and Non-Linear Eddy Current techniques present promising capabilities for detecting early-stage fatigue in Martensitic Stainless-steel samples, and this work contributes to advancing fatigue detection in ferromagnetic structures using magnetic non-destructive evaluation techniques.
In Part II of this dissertation defense, the focus is on addressing critical challenges of monitoring the structural health of engineering structures, which are susceptible to damage from both stress and environmental factors. Traditional ultrasonic nondestructive evaluation techniques typically involve contact-based procedures that necessitate the use of a couplant. However, this thesis explores the use of Electromagnetic Acoustic Transducers, which offer a compelling non-contact alternative. Electromagnetic Acoustic Transducers utilize the Lorentz force, acting on induced currents, to excite elastic waves in a sample, eliminating the need for direct contact. The drawback of conventional Electromagnetic Acoustic Transducers being limited to conductive or ferromagnetic samples is addressed through the introduction of a novel Electromagnetic Acoustic Transducer, specifically designed for non-conductive samples.
This novel Electromagnetic Acoustic Transducer presents two distinct configurations: (a) direct excitation and (b) non-contact induced excitation, both utilizing the Lorentz force transduction mechanism. A thorough investigation into the metal patch geometry employed in both configurations is detailed, providing valuable design insights. The numerical model of these Electromagnetic Acoustic Transducer configurations is developed using COMSOL, and simulation results robustly affirm the feasibility of the proposed approach. By successfully extending the applicability of Electromagnetic Acoustic Transducers to non-conductive samples and introducing the innovative embedded Electromagnetic Acoustic Transducer, this research significantly contributes to advancing the field of structural health monitoring and presents a viable nondestructive evaluation approach for the effective detection of damage in engineering structures.
Department: Electrical and Computer Engineering
Name: Demetris Coleman
Date Time: Friday, December 1st, 2023 - 3:00 p.m.
Advisor: Xiaobo Tan
Autonomous underwater vehicles have a variety of applications such as environmental monitoring, search and rescue, ocean exploration, and fish tracking. One such class of these vehicles is gliding robotic fish, which realize energy-efficient locomotion and high maneuverability by combining buoyancy-driven gliding and fin-actuated swimming. The goal of this dissertation is to endow gliding robotic fish with advanced control capability and autonomy, to facilitate their ultimate applications in aquatic environments.
First, an overview of the gliding robotic fish platform GRACE is presented and design improvements for the third generation of GRACE are discussed. These include adding Iridium satellite-based communication for remote operation, making the robot more robust for ocean operation, and developing a miniaturized version (Mini-Glider) to enable rapid testing of functionality and control algorithms.
Second, a backstepping-based trajectory tracking controller for the energy-efficient gliding-like motion of gliding robotic fish is proposed. The controller is designed to track the desired pitch angle and reference position in 3D space. In particular, under-actuation is addressed by exploiting the coupled dynamics and introducing a modified error term that combines pitch and horizontal position tracking errors. Two-time-scale analysis of singularly perturbed systems is used to establish the convergence of all tracking errors to a neighborhood around zero. The effectiveness of the proposed control scheme is demonstrated via simulation and experimental results.
Next, incorporating observability into control schemes is discussed. Incorporating observability can enhance an observer's ability to recover accurate estimates of unmeasured states, minimize estimation error, and ultimately, allow the original control objective to be achieved. The use of control barrier functions (CBFs) is proposed to enforce observability and thereby encourage convergence of state estimates to the true state in output feedback control schemes. The proposed approach is compared to a model predictive control (MPC)-based alternative that optimizes a weighted combination of an observability surrogate function and the control objective. Motivated by the applications of fish tracking and navigating in GPS-denied environments, the problem of target tracking, when only the distance to the target is measured, is addressed. It is found that both approaches are comparable in terms of observability and estimation error, but the CBF-based approach has an edge in terms of computational efficiency. Experimental validation of the CBF-based scheme is conducted with a Mini-Glider.
To complete this body of work, a strategy for the exploration of unknown scalar fields under localization uncertainty is proposed. The strategy hinges on the concept of the multi-fidelity Gaussian processes (GPs) and sampling-based motion planning for information gathering. It uses multi-fidelity GPs to approximate the environmental field by assigning location-measurement pairs to a particular fidelity based on the level of uncertainty in the location estimate. An informative trajectory planner is then designed that plans not only where the robot should go, but also what types of motion (e.g. swimming, gliding, etc.) the robot should use to best gather information for the reconstruction of the field.
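A full multi-fidelity GP is beyond a short sketch, but the following Python snippet illustrates the underlying idea in a simplified surrogate form: measurements taken under high localization uncertainty are given a larger per-sample noise level in an ordinary GP fit (scikit-learn's per-sample alpha), so they influence the reconstructed field less. This is an illustration under that simplifying assumption, not the dissertation's multi-fidelity model.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)

    # 1-D "light field" along a transect of the tank (synthetic ground truth).
    def field(x):
        return np.exp(-((x - 0.6) ** 2) / 0.05)

    x_meas = rng.uniform(0.0, 1.0, 40)
    loc_sigma = rng.choice([0.01, 0.15], size=40)      # localization error level
    # Samples with poor localization are treated as noisier observations.
    y_meas = field(x_meas + loc_sigma * rng.standard_normal(40))
    per_sample_noise = (0.02 + 2.0 * loc_sigma) ** 2

    gp = GaussianProcessRegressor(
        kernel=RBF(length_scale=0.1) + WhiteKernel(1e-4),
        alpha=per_sample_noise,        # per-sample noise on the kernel diagonal
        normalize_y=True,
    )
    gp.fit(x_meas.reshape(-1, 1), y_meas)

    x_grid = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
    mse = np.mean((gp.predict(x_grid) - field(x_grid.ravel())) ** 2)
    print("reconstruction MSE with uncertainty-weighted samples:",
          round(float(mse), 4))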
Experiments are carried out on a Mini-Glider for the task of mapping the light field in an indoor tank. The results show that using a multi-fidelity GP model provides a better reconstruction of the field in terms of the weighted mean squared error when compared to using standard GP regression, where the localization error is ignored.
Department: Electrical and Computer Engineering
Name: Hasanur R. Chowdhury
Date Time: Tuesday, November 21, 2023 - 10:00 a.m.
Advisor: Ming Han
High-accuracy temperature and strain measurements are prerequisites for many modern industries to ensure safety, improve efficiency, and reduce greenhouse gas emissions. Traditional thermocouples and electronic devices often encounter challenges in temperature and strain measurement due to cross-sensitivity to surrounding perturbations, sensor drift at elevated temperature, or susceptibility to electromagnetic interference (EMI). To overcome these, fiber-optic sensors have gained popularity due to their unique advantages, including small size, multiplexing capacity, and immunity to EMI. In this work, we report a novel approach to measuring temperature using a fiber-optic Fabry-Pérot (FP) interferometer, which eliminates cross-sensitivity to strain, shows linearity at high temperature, and provides high accuracy over a broad range. In addition, we developed another sensor for simultaneous measurement of temperature and strain using a cascaded fiber Bragg grating (FBG)-silicon FP interferometer configuration.
Our proposed temperature measurement method is based on an air-filled FP cavity whose spectral notches shift due to a precise pressure variation in the cavity. For fabrication, a fused silica tube is spliced with a single-mode fiber at one end and a side-hole fiber at the other to form the FP cavity. The pressure in the cavity can be changed by passing air through the side-hole fiber, causing the spectral shift, which is the measurand of temperature. We have developed two novel approaches based on this setup. The first approach employs two pressure values, their corresponding interferometric valley wavelengths, and the gas material's constant (α) to obtain temperature. A computer-controlled pressure calibration and sensor interrogation system with miniaturized instruments has been developed to operate this sensor. Experimental results show that the sensor has a high wavelength resolution (<0.2 pm) for minimal pressure fluctuation (2.5×10⁻³ psi) over a broad temperature range (over 800 ℃). We analyzed the effect of wavelength noise and pressure fluctuation on temperature resolution, which reveals that our developed system can obtain a high-resolution (±0.32 ℃) temperature measurement. The use of gas as the sensing material and the measurement mechanism also implies long-term stability and eliminates cross-sensitivity to strain.
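Schematically (a simplified statement of the underlying relation rather than the dissertation's exact equations), the valley wavelengths of the gas-filled cavity track the gas refractive index, which for a dilute gas scales with pressure over temperature; comparing the valley wavelength at two known pressures therefore yields the temperature:

    \[ \lambda_{m} = \frac{2 n L}{m}, \qquad n(P, T) - 1 \approx \frac{\alpha P}{T} \;\Rightarrow\; \frac{\Delta\lambda}{\lambda} \approx \frac{\alpha\,(P_{2} - P_{1})}{T} \;\Rightarrow\; T \approx \frac{\alpha\,(P_{2} - P_{1})\,\lambda}{\Delta\lambda}, \]

where L is the cavity length, m the interference order, and α the gas constant referred to in the abstract.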
In the second approach, we used a pair of FP cavities filled with gas at identical but variable pressure. One of the FPs (the reference FP) is placed in a cold zone at a known temperature. The temperature of the measuring FP can be deduced from the spectral fringe shift versus pressure of the two FPs. This method does not require measurement of the pressure or knowledge of the optical properties of the gas; hence it makes the instrumentation simpler and more cost-effective and the data acquisition faster. We have verified this method experimentally up to 800 ℃, where the sensor shows good linearity. A long-term test conducted at 800 ℃ exhibited the stability of the sensor, with fluctuations of ≤0.3% over a duration exceeding 100 hours.
In addition to these air-filled FP interferometers, we have presented another novel sensor based on a cascaded fiber Bragg grating (FBG)-silicon FP interferometer (FPI) for simultaneous measurement of temperature and strain. The sensor is composed of a 5 mm grating on a single-mode fiber and a 100 µm silicon tip attached to its end by UV-curable glue. The silicon tip is unbonded and free from strain, whereas the FBG is attached to the host structure. The sensor is tested from room temperature to 100 ℃ with strain varying up to ∼150 µε. The silicon FPI provides a high temperature sensitivity of 89 pm/℃ unaffected by strain. In contrast, the FBG is affected by both thermal and mechanical strain; its sensitivities are experimentally obtained as 32 pm/℃ and 1.09 pm/µε, respectively. With a high-speed spectrometer, the temperature and strain resolutions of the FPI and FBG are found to be ±1.9×10⁻³ ℃ and ±0.042 µε, respectively. Due to its small size, enhanced sensitivity, and high resolution, this cascaded FBG-FPI sensor can be used in practical applications where accurate measurement of temperature and strain is required.
Department: Electrical and Computer Engineering
Name: Zi Li
Date Time: Tuesday, November 21, 2023 - 10:00 a.m.
Advisor: Yiming Deng
Even after extensive efforts to enhance our understanding of materials, modeling, and system processes, uncertainty continues to be an inevitable factor that impacts system behavior, especially at the operational limits. The evaluation of uncertainty is now a common practice in engineering and scientific fields, encompassing the analysis of experimental data, as well as numerous computational models and process simulations. Non-destructive evaluation (NDE) techniques are widely utilized across a range of industries and applications to guarantee the safety, quality, and dependability of components, systems, and structures. However, NDE processes are often challenged by uncertainties stemming from factors such as material variations, environmental conditions, and measurement limitations, which can introduce complexities into the assessment process. Therefore, there is a need to quantify uncertainties in NDE, which can enhance our comprehension of the constraints and potential inaccuracies linked to NDE inspections and aid in making NDE assessments more robust and reliable. In this thesis, a comprehensive uncertainty quantification (UQ) framework: the Three-Legged Stool (TLS) is proposed to provide systematic guidance in uncertainty analysis for NDE applications.
A Magnetic Flux Leakage (MFL)-based defect characterization algorithm is proposed to classify defects and handle uncertainties in pipeline inspection. The research compares Convolutional Neural Network (CNN) and Deep Ensemble (DE) methods for handling input uncertainties in MFL response data, while also employing an autoencoder for data augmentation to address limited experimental data. The study evaluates prediction accuracy and explores uncertainty analysis, emphasizing the importance of reliability assessment in MFL-based NDE decision-making.
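As a compact illustration of the Deep Ensemble idea referenced above (a generic toy classifier ensemble, not the MFL models used in the dissertation), several identically structured networks are trained from different random initializations and their disagreement is used as a predictive-uncertainty estimate.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=600, n_features=12, n_classes=3,
                               n_informative=6, random_state=0)

    # Train an ensemble of small MLPs differing only in random initialization.
    ensemble = [
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=800,
                      random_state=seed).fit(X, y)
        for seed in range(5)
    ]

    # Shape: (members, samples, classes) for a few example inputs.
    probs = np.stack([m.predict_proba(X[:5]) for m in ensemble])
    mean_prob = probs.mean(axis=0)
    uncertainty = probs.std(axis=0).mean(axis=1)   # spread across members

    print("ensemble-mean class probabilities:\n", np.round(mean_prob, 3))
    print("per-sample uncertainty (std across members):", np.round(uncertainty, 3))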
To estimate the fatigue life of martensitic-grade stainless-steel turbine blades, a magnetic Barkhausen noise (MBN) technique is applied. This work involves the extraction of time and frequency domain features, followed by the application of techniques such as Principal Component Analysis (PCA) and probabilistic neural network (PNN) for classifying and estimating the remaining fatigue life.
An IMU-assisted robotic SL sensing system was developed for pipeline detection. This system improves registration and defect estimation through a RANSAC assisted cylindrical fitting algorithm, integrates inertial and odometry measurements for precise 3D profiling, and employs customized defect sizing techniques to offer a reliable 3D defect reconstruction solution for various defect shapes and depths.
The proposed TLS-based UQ framework highlights the interdependent dynamics among data, models, and learning when addressing uncertainties in NDE processes. Some advanced and commonly used techniques have been introduced to illustrate how uncertainties in the inputs or parameters of an NDE system, model, or measurement are propagated to the outputs or predictions. The uncertainty propagation is considered in terms of the forward modeling and inverse learning process separately. In order to demonstrate the efficiency and applicability of the proposed framework for NDE applications, the uncertainties in the previously mentioned NDE cases are investigated and quantified using the techniques outlined in the TLS model.
In summary, the proposed UQ framework is able to provide guidance in dealing with uncertainties in NDE inspection with efficient and reliable solutions. It holds great promise and opens up avenues for further research and advancement within the industry.
Department: Electrical and Computer Engineering
Name: Pengyu Chu
Date Time: November 17, 2023 - 11:00 a.m.
Advisor: Zhaojian Li
ROBUST FRUIT DETECTION AND LOCALIZATION FOR ROBOTIC HARVESTING
Automated apple harvesting has attracted significant research interest in recent years due to its potential to revolutionize the apple industry by addressing labor shortages and high labor costs. One key enabling technology for automated harvesting is robust apple detection and localization, which poses great challenges because of the complex orchard environment, with its varying lighting conditions and foliage/branch occlusions. In this dissertation, we first propose a suppression Mask R-CNN to improve the accuracy of apple detection. Our feature suppression network significantly reduces false detections by filtering non-apple features learned by the feature learning backbone. We then propose a novel deep learning-based object detection method, the Occluder-Occludee Relational Network (O2RNet), which addresses the challenge of detecting and isolating clustered apples in orchards. Previous object detection techniques have exhibited limited success in handling fruit occlusion and clustering, which are common issues in agricultural settings. To overcome these challenges, O2RNet employs a two-stage approach. In the first stage, a custom deep Feature Pyramid Network (FPN) architecture generates candidate regions of interest (ROIs) for potential fruit objects. The second stage feeds these candidate ROIs into the occluder branch and the occludee branch, respectively, using a feature expansion structure (FES). By leveraging this two-stage approach, O2RNet can effectively isolate individual apples from clustered regions, thereby facilitating accurate apple detection.
Then, we propose Active Laser-Camera Scanning (ALACS) to achieve a high-precision 3D localization of detected apples and overcome existing localization challenges like varying illumination conditions, complex occlusion scenarios, and limited geometric information. The hardware of ALACS includes a red line laser, an RGB camera, and a linear motion slide. All these components are seamlessly integrated for fruit localization by using an active scanning scheme and laser-triangulation technique. The technique integrates semantic information from O2RNet's detection results with bounding boxes to generate accurate 3D coordinates for each detected apple.
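As background on the localization principle (the generic laser-triangulation geometry; the ALACS calibration model may differ), range follows from the law of sines in the laser-camera-point triangle:

    \[ Z = b\,\frac{\sin\alpha\,\sin\beta}{\sin(\alpha + \beta)}, \]

where b is the baseline between the line laser and the camera, α and β are the angles that the laser ray and the camera ray make with the baseline, and Z is the perpendicular distance from the baseline to the illuminated point on the apple.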
Additionally, we propose the Skeleton-lead Segmentation Network (SkeSegNet) and integrate it into Panoptic-Deeplab. SkeSegNet addresses the challenge of segmenting complex branches by treating branches as a combination of skeletons. Combined with the depth map, SkeSegNet generates 3D branch reconstructions for efficient obstacle avoidance.
Lastly, we evaluate each approach in comprehensive experiments, and the results demonstrate the effectiveness of the proposed approaches.
Department: Electrical and Computer Engineering
Name: Piyush Gupta
Date Time: Monday, October 26, 2023 - 10:00 a.m.
Location: EB Room 1420
Advisor: Dr. Vaibhav Srivastava
Human-in-the-loop systems play a pivotal role in numerous safety-critical applications, ensuring both safety and efficiency in complex operational environments. However, these systems face a significant challenge stemming from the inherent variability in human performance, influenced by factors such as workload, fatigue, task learning, expertise, and individual differences. Therefore, effective management of human cognitive resources is paramount in designing efficient human-in-the-loop systems.
To address this challenge, it is critical to design robust and adaptive systems capable of continuously adapting models of human performance, and subsequently providing tailored feedback to enhance it. Effective feedback mechanisms play a pivotal role in improving the overall system performance by optimizing human workload, fostering skill development, and facilitating efficient collaboration among individuals within diverse human teams, each with their unique skill sets and expertise.
In this dissertation, the primary focus lies in exploring optimal and game-theoretic approaches for feedback design to enhance system performance, particularly in scenarios where humans are integral components. We begin by studying the problem of optimal fidelity selection for a human operator servicing a stream of homogeneous tasks, where fidelity refers to the degree of exactness and precision while servicing the task. Initially, we assume a known human service time distribution model, later relaxing this assumption. We design a human decision support system that recommends optimal fidelity levels based on the operator’s cognitive state and queue length. We evaluate our methods through human experiments involving participants conducting underwater mine searches.
We extend the optimal fidelity selection problem by incorporating uncertainty into the human service-time distribution. This extension involves the development of a robust and adaptive framework that accurately learns the human service-time model and adapts the policy while ensuring robustness under model uncertainty. However, a major challenge in designing adaptive and robust systems arises from the conflicting objectives of exploration and robustness. To mitigate system uncertainty, an agent must explore high-uncertainty regions of the state space, while robust policy optimization seeks to avoid these regions for worst-case performance. To address this trade-off, we introduce an efficient Deterministic Sequencing of Exploration and Exploitation (DSEE) algorithm for model-based reinforcement learning. DSEE interleaves exploration and exploitation epochs of increasing length, resulting in sub-linear cumulative regret growth over time.
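To make the epoch structure concrete, the Python sketch below applies the deterministic exploration/exploitation sequencing idea to a toy multi-armed bandit (the dissertation's DSEE operates on a model-based RL problem; this simplified bandit version, with illustrative epoch lengths, only shows the scheduling pattern).

    import numpy as np

    rng = np.random.default_rng(0)
    true_means = np.array([0.2, 0.5, 0.8])      # unknown arm rewards (toy problem)
    counts = np.zeros(3)
    sums = np.zeros(3)
    history = []

    def pull(arm):
        r = rng.normal(true_means[arm], 0.1)
        counts[arm] += 1
        sums[arm] += r
        history.append(r)

    # Deterministic sequencing: exploration and exploitation epochs with
    # geometrically growing lengths (no randomness in the schedule itself).
    for k in range(1, 8):
        for _ in range(2 ** k):                  # exploration epoch: round-robin
            pull(int(np.argmin(counts)))
        best = int(np.argmax(sums / np.maximum(counts, 1)))
        for _ in range(4 * 2 ** k):              # exploitation epoch: empirical best
            pull(best)

    regret = true_means.max() * len(history) - np.sum(history)
    print("pulls:", len(history), " cumulative regret:", round(float(regret), 1))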
In addition to cognitive resource management, enhancing human performance can also be achieved through task learning and skill development. In this context, we study the impact of evaluative feedback on human learning in sequential decision-making tasks. We conducted experiments on Amazon Mechanical Turk, where participants engaged with the Tower of Hanoi puzzle and received AI-generated feedback during their problem-solving. We examined how this feedback influenced their learning and skill transfer to related tasks. Additionally, we explored computational models to gain insights into how individuals integrate evaluative feedback into their decision-making processes.
Lastly, we expand our focus from a single human operator to a team of heterogeneous agents, each with diverse skill sets and expertise. Within this context, we delve into the challenge of achieving efficient collaboration among heterogeneous team members to enhance overall system performance. Our approach leverages a game theoretic framework, where we design utility functions to incentivize decentralized collaboration among these agents.
Department: Electrical and Computer Engineering
Name: Haojun Wang
Date Time: Tuesday, August 22, 2023 - 11:00 a.m.
Location: C-103 Engineering Research Complex and Zoom
Advisor: Dr. Hogan
This dissertation presents two innovative contributions in the realm of materials science and nanotechnology. The first part introduces a novel integrated photodetector design, combining the two-dimensional material MoS2 with plasmonic nanoantenna arrays (NAs). These gold NAs were fabricated by e-beam lithography and strategically positioned above and below a MoS2 semiconductor layer. The nanoarrays led to significant local electric field enhancement through the thickness of the MoS2 layer at the nanoantenna interface and a resulting optical detection enhancement factor of up to 25. The fabrication process of the photodetector is detailed in this dissertation, encompassing MoS2 nanosheet transfer, NAs patterning, and layered NAs alignment. Experimental and simulation-based characterizations affirm the potential of the proposed integrated photodetector for enhanced optical field absorption and detection, with applications in photodetection and nonlinear optical processes.
The dissertation then delves into the electrical characterization of MoS2-based photodetectors, concentrating on photosensitivity and optimization parameters. Notably, the incorporation of the NAs significantly enhances electron-hole pair generation and reduces resistance. Optimized conditions for high net photocurrent and minimal power consumption are identified. Moreover, the nonlinear absorption behavior of the NAs-integrated devices is investigated, revealing the exceptional nonlinear optical properties of the double-layered NA/MoS2/NA structure. This structure exhibits strong two-photon absorption and provides valuable insights into nonlinear absorption processes, promising applications in near-infrared detection, energy harvesting, and spectroscopy of organic materials.
In the second part of the dissertation, a groundbreaking technique called reactive pulsed laser deposition of SiC is introduced. This technique allows precise and controlled deposition of a large number of SiC particles. The process involves a pulsed laser generating a localized hot spot on a target source, resulting in the ejection of silicon (Si) and carbon (C) atoms that combine to form SiC nanoparticles on the substrate surface. The fabricated SiC particles display intriguing photoluminescent properties and enable the production of a diode with distinct current rectification behavior. The experimental results demonstrate the efficacy of the reactive pulsed laser deposition technique, showcasing its potential for advancing the localized fabrication of SiC-based electronic devices and structures.
This Ph.D. dissertation significantly contributes to the understanding of integrated photodetectors, nonlinear optical effects, and precise material deposition techniques. The insights gained pave the way for enhanced optical field and absorption in photodetection, nonlinear optical processes, SiC-based devices, and open up new avenues for diverse research fields.
Department: Electrical and Computer Engineering
Name: Abu Farzan Mitul
Date Time: Monday, August 7, 2023 - 3:00 p.m.
Location: Zoom
Advisor: Prof. Ming Han
Optical fiber sensors are employed to study and investigate a range of physical parameters, e.g., pressure, stress, vibration, rotation, current, bending, and displacement. Moreover, optical fiber sensors are also used for several kinds of chemical parameters, e.g., composition, level, liquid flow, concentration, and the detection and monitoring of gases. Fiber-optic sensor (FOS) technology relies on optical components such as light-processing elements (filters), optical sources (laser, LED), optical detectors (spectrometer, photodiode), and light-guiding elements (lenses). Within FOS technology, the laser diode (LD) is of particular importance due to its easy integration, small size, and moderate price. Semiconductor lasers are complex nonlinear systems in which relatively small optical feedback can have a profound impact on the spectral and temporal behavior of the laser output. Under appropriate conditions, optical feedback provides a straightforward and highly effective way to reduce the laser linewidth. These conditions can be met in the so-called self-injection locking [1, 2] or filtered optical feedback [3, 5] configurations, where part of the output light, after passing through an optical resonator, is injected back into the laser to interfere coherently with the light inside the laser's internal cavity. Due to their excellent noise performance and straightforward implementation, fiber-pigtailed lasers under self-injection locking have been studied as light sources for fiber-optic sensor systems whose performance is sensitive to laser frequency noise, such as phase-sensitive optical time-domain reflectometry systems and fiber-optic gyroscopes [6, 7, 8].
We present a method to suppress the wavelength drift of a semiconductor laser with filtered optical feedback from a long fiber-optic loop. The laser wavelength is stabilized to the filter peak through actively controlling the phase delay of the feedback light. A detailed steady-state analysis of the laser wavelength is performed to illustrate the method. Experimentally, the wavelength drift was reduced by 75% compared to the case without phase delay control. The active phase delay control had negligible effect on the line narrowing performance of the filtered optical feedback to the limit of the measurement resolution.
The long optical feedback length makes the lasers prone to mode-hopping. There have been reported attempts at suppressing mode-hopping by light polarization control and by using more compact resonators [6, 7], but no detailed characterization of mode-hopping and the associated laser instability has been reported. We studied the mode-hopping and laser instability of the self-injection locked laser and found that a mode-hopping event causes an abrupt change in the laser intensity after the resonator inside the feedback loop. Experiments show that the frequency of locked lasers can oscillate during unstable operation. The fundamental frequency is determined by the time delay of the feedback light.
We demonstrate the use of a self-injection locked distributed feedback (DFB) diode laser for high-sensitivity detection of acoustic emission (AE) using a fiber-coil Fabry-Perot interferometer (FPI) sensor. The FPI AE sensor is formed by two weak fiber Bragg gratings on the ends of a long span of coiled fiber, resulting in dense sinusoidal fringes in its reflection spectrum that allows the use of a modified phase-generated carrier demodulation method. The demodulation method does not require agile tuning capability of the laser, which makes the self-injection locked laser particularly attractive for the application. Little work has been reported on using self-injection locked lasers in fiber-optic AE or ultrasonic sensor systems due to the challenges induced by the lack of the agile wavelength tuning capability of a self-injection locked laser. Experimental results indicate that the self-injection locked laser increases the signal-to-noise ratio by ~33 dB compared with the free-running DFB laser.
Furthermore, we have developed a low-cost fiber-optic sensor system that can measure absolute strain at multiple positions along a fiber using fiber Bragg grating sensors. A challenge in absolute strain measurement with an optical interferometer is that the fringe order is high and typically cannot be determined precisely; a small strain can produce a spectral shift of multiple orders, resulting in ambiguity in determining the absolute strain. In this system, we form an “rf interferometer” with low-order fringes so that the strain-induced rf spectral shift stays within half of the fringe period, eliminating the phase ambiguity and enabling absolute strain measurement.
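The readout principle can be illustrated with a minimal, hedged sketch; the fringe period and strain-to-shift coefficient below are hypothetical values, not parameters of the reported system.

```python
# Minimal sketch (not the authors' implementation): absolute strain from an
# rf-interferometer fringe shift. Unambiguous readout requires the shift to
# stay within half of the fringe period. All numbers are illustrative.

def absolute_strain(shift_hz, fringe_period_hz, shift_per_microstrain_hz):
    """Convert an rf spectral shift to absolute strain (in microstrain)."""
    if abs(shift_hz) >= fringe_period_hz / 2:
        raise ValueError("Shift exceeds half the fringe period: phase-ambiguous.")
    return shift_hz / shift_per_microstrain_hz

# Hypothetical numbers for demonstration only:
print(absolute_strain(shift_hz=12e3, fringe_period_hz=100e3,
                      shift_per_microstrain_hz=1.5e3))  # -> 8 microstrain
```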
Email sandra@msu.edu for Zoom information
Department:
Electrical and Computer Engineering
Name:
Dong Chen
Date Time:
Tuesday, July 25, 2023 - 10:00am
Location:
Zoom
Announcement:
ABSTRACT
Advisor: Dr. Zhaojian Li
Autonomous systems such as robots and autonomous vehicles are emerging as promising solutions to improve efficiency and overcome the global labor shortage. These systems are often able to operate independently and with high scalability. However, their control presents unique challenges due to the high dimensionality of their state spaces and the complexity of interactions between their various components. Conventional control methods often struggle to manage real-time control for large-scale autonomous systems due to the inherent complexity and unpredictability of these systems. Fortunately, reinforcement learning (RL) algorithms, especially multi-agent reinforcement learning (MARL), have emerged as effective solutions, addressing the complexities of autonomous system control through their adaptive online capabilities and their proficiency in solving intricate problems. In this thesis, three distinct deep MARL algorithms are explored for efficient and scalable control of large-scale autonomous systems. To demonstrate the effectiveness of these approaches, we test these MARL algorithms on practical, real-world applications such as power grids and autonomous driving.
In the first algorithm, an efficient and scalable MARL framework is developed specifically for dynamic traffic scenarios, where the communication topology can be time-varying. This framework leverages parameter sharing and local rewards to encourage cooperation between agents, while still maintaining impressive scalability. To significantly reduce the collision rate and expedite the training process, a novel priority-based safety supervisor is incorporated into the framework. Furthermore, a gym-like simulation environment is developed and open-sourced with three different levels of traffic densities. Comprehensive experimental results show that the proposed MARL framework consistently outperforms several state-of-the-art benchmarks and shows its significant potential for use in the control of autonomous systems in dynamic environments.
In our second exploration, we propose a fully-decentralized MARL framework for Cooperative Adaptive Cruise Control (CACC). This approach differs substantially from the conventional centralized training and decentralized execution (CTDE) method. Here, each agent makes decisions based solely on its local observations and individual rewards without the need for a central controller. To address the non-stationarity issues inherent in systems with partial observability, we further introduce a quantization-based communication protocol to enhance communication efficiency by applying random quantization to the messages being communicated and ensuring that critical information is transmitted with minimized bandwidth usage. We evaluate this approach in two distinct CACC environments, showing that our proposed approach outperforms existing approaches in both control performance and communication efficiency.
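The random quantization step described above can be illustrated with a small, hedged sketch; the bit width, message range, and unbiased rounding rule are generic assumptions rather than details taken from the thesis.

```python
# Hedged sketch of a random (stochastic) quantizer of the kind described for
# compressing inter-agent messages; parameters are illustrative assumptions.
import numpy as np

def stochastic_quantize(msg, low=-1.0, high=1.0, bits=4, rng=np.random):
    """Unbiased random quantization of a message vector onto 2**bits levels."""
    levels = 2 ** bits - 1
    x = np.clip((np.asarray(msg) - low) / (high - low), 0.0, 1.0) * levels
    lower = np.floor(x)
    # Round up with probability equal to the fractional part, so E[q] = x.
    q = lower + (rng.random(x.shape) < (x - lower))
    return low + q / levels * (high - low)

msg = np.array([0.13, -0.52, 0.77])
print(stochastic_quantize(msg))   # transmitted with ~4 bits per entry
```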
In our third exploration, we propose an efficient MARL algorithm tailored specifically for cooperative control within power grids. Specifically, we focus on the decentralized inverter-based secondary voltage control problem inherent in distributed generators (DGs) and formulate it as a cooperative MARL problem. We then introduce a novel on-policy MARL algorithm, named PowerNet, where each agent (i.e., each DG) learns a control policy based on (sub-)global reward, as well as encoded communication messages from its neighbors. Furthermore, a novel spatial discount factor is introduced to mitigate the effect of remote agents, expedite the training process and improve scalability. Moreover, a differentiable, learning-based communication protocol is developed to strengthen collaboration among neighboring agents. In order to facilitate training and evaluation, we develop and open-source PGSim, a highly efficient, high-fidelity power grid simulation platform. Our experimental results in two microgrid setups demonstrate that PowerNet not only outperforms the conventional model-based control method but also surpasses several state-of-the-art MARL algorithms.
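As a rough illustration of the spatial discount factor mentioned above, the sketch below weights other agents' rewards by gamma_s raised to the hop distance, so remote agents contribute less; the topology, reward values, and function names are hypothetical, not taken from PowerNet's implementation.

```python
# Hedged sketch of spatial discounting: an agent's learning signal combines
# other agents' rewards scaled by gamma_s**hop_distance.

def spatially_discounted_reward(hop_dist, rewards, gamma_s=0.6):
    """hop_dist: {agent_j: graph distance from agent i}; rewards: {agent_j: r_j}."""
    return sum(gamma_s ** d * rewards[j] for j, d in hop_dist.items())

# Illustrative 4-agent line topology, distances measured from agent 0:
hop_dist = {0: 0, 1: 1, 2: 2, 3: 3}
rewards = {0: 1.0, 1: 0.5, 2: -0.2, 3: 0.1}
print(spatially_discounted_reward(hop_dist, rewards))  # 1.0 + 0.3 - 0.072 + 0.0216
```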
Department:
Electrical and Computer Engineering
Name:
Shivam Bajaj
Date Time:
Wednesday, July 19, 2023 - 1:00pm
Location:
3112 Engineering Building
Announcement:
ABSTRACT
Advisor: Dr. Shaunak D Bopardikar
The advancement of technology, especially Unmanned Aerial Vehicles (UAVs) or drones, has helped mankind in many aspects of everyday life, such as environmental monitoring and surveillance. However, easy access to UAV technology has also spurred its malicious use, leading to numerous attempts to fly UAVs into restricted areas or public places. One possible countermeasure against such adversarially intruding UAVs is to tag or disable them, using superior drones, before they reach a specified location. However, how to plan the motion of these drones, i.e., how to design algorithms with provable guarantees on the number of adversarial UAVs that can be disabled, has remained an open problem.
This dissertation addresses the design of control strategies and online algorithms, i.e., algorithms that do not have a priori information about the intruders, for drones to pursue and disable one or many intruders, and is divided into two parts. The first part involves many, possibly infinitely many, intruders that move directly towards a region of interest. For this scenario, we design decentralized as well as cooperative online algorithms with provable worst-case guarantees for 1) a single drone defender, 2) a team of homogeneous defenders, and 3) a team of heterogeneous defenders. The aim of such defender drones is to capture as many of the arriving intruders as possible. To quantify how well the algorithms perform in the worst case, we adopt a competitive analysis technique. In particular, the algorithms designed in this dissertation exhibit a finite competitive ratio, meaning that the performance of an online algorithm is no worse than a finite factor, determined in this dissertation, of the offline optimum. We also determine fundamental limits on the existence of online algorithms with finite competitive ratios.
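For readers unfamiliar with competitive analysis, the hedged sketch below shows how an empirical competitive ratio could be tallied over a set of arrival sequences; the two policy callables are placeholders, not the algorithms developed in the dissertation.

```python
# Hedged illustration of the competitive-analysis metric: compare an online
# defender policy against an offline optimum that knows the arrivals in advance.

def empirical_competitive_ratio(instances, online_policy, offline_optimum):
    worst = 0.0
    for arrivals in instances:
        alg = online_policy(arrivals)        # captures without foreknowledge
        opt = offline_optimum(arrivals)      # captures with full foreknowledge
        if alg == 0 and opt > 0:
            return float("inf")              # unbounded: no finite competitive ratio
        if alg > 0:
            worst = max(worst, opt / alg)
    return worst  # an algorithm is c-competitive if this never exceeds c
```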
In terms of heterogeneity, the first part addresses drones with different capabilities as well as different motion models, such as a drone and a turret operating in the same environment. The second part of this dissertation considers coupling between the motions; specifically, it considers a turret or a laser attached to a drone. The drone is modelled as a planar Dubins vehicle, and the laser, which has a finite range and is attached to the Dubins vehicle, can rotate clockwise or anti-clockwise. We design an optimal control strategy for both the Dubins vehicle and the laser such that a static target located in the environment is tagged in minimum time. By applying Pontryagin's maximum principle, we establish cooperative properties between the laser and the Dubins vehicle. We further establish that the shortest path must lie in a family of 13 candidate paths and characterize the solutions for all of these types.
Department:
Electrical and Computer Engineering
Name:
Ibrahim M. Allafi
Date Time:
Wednesday, May 24, 2023 - 10:00am
Location:
2219 Engineering Building and Zoom
Announcement:
ABSTRACT
Advisor: Dr. Shanelle N. Foster
Permanent magnet synchronous machines (PMSMs) are widely used in various industries such as transportation, manufacturing and renewable energy. The simple structure of direct torque control (DTC), coupled with its encoderless operation and fast dynamics, makes it of great interest for PMSMs. Nevertheless, the occurrence of faults, such as turn-to-turn short circuit, high resistance contact, static eccentricity and partial demagnetization, remains a concern. Faults can prevent smooth drive operation under DTC and potentially lead to catastrophic losses if not detected and mitigated in their early phases. Hence, fault diagnosis of DTC driven PMSMs is paramount to ensuring reliable drive operation.
An essential aspect of developing effective fault diagnosis is to understand the impact of faults on drive operation and the drive's corresponding reaction. A comprehensive examination of the nonlinear behavior of the flux and torque hysteresis comparators in DTC driven PMSMs provides this insight. It is shown that DTC can tolerate low-severity faults within the controller bandwidth while continuing to operate normally. However, when the flux and torque errors exceed the bandwidth, DTC counteracts by introducing negative sequence voltages and torque angle variations, which impact fault diagnosis and control under faulty conditions.
Many existing fault diagnosis methods are based on field oriented control (FOC); however, it is not well understood how these methods translate to DTC driven PMSMs. Machine Voltage Signature Analysis (MVSA) is the most commonly used approach for fault diagnosis in electric machines. However, the use of DTC introduces challenges for adopting MVSA due to DTC's compensation behavior, structure, and regulation principle. A novel fault diagnosis approach for DTC driven PMSMs is developed. This approach maintains the simple structure of DTC, removes the need for complex signal processing tools, and relies solely on the signals already available in the drive. The occurrence of faults results in unique deviations in the direction and magnitude of the commanded voltages in the stator flux linkage (MT) frame, enabling fault detection, classification, and severity assessment.
Ultimately, the fault diagnosis algorithm used for inverter driven PMSMs should be effective and applicable irrespective of the control type. A comprehensive fault diagnosis approach is developed based on active and reactive power signature analysis. This data driven algorithm uses spectral components of the power signals as fault indicators. It is shown that this developed algorithm is capable of fault diagnosis in both FOC and DTC driven PMSMs.
The reliability of inverter driven PMSMs depends on the ability to monitor their state of health during operation. It is necessary to detect that a fault has occurred, identify the fault type, and estimate its severity. Classification algorithms are used to separate fault types and estimate fault severity. Here, the performance of three classification algorithms is evaluated for inverter driven PMSMs: linear discriminant analysis (LDA), k-nearest neighbor (k-NN), and support vector machines (SVM). The SVM classifier is shown to be a highly effective method for detecting and classifying faults in PMSMs controlled by either drive, even with limited training data and high noise levels.
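As a hedged illustration of the classifier comparison described above, the sketch below evaluates LDA, k-NN, and SVM on a generic fault-feature matrix with scikit-learn; the feature construction (e.g., spectral components of voltage or power signals) and the hyperparameters are assumptions, not the author's settings.

```python
# Minimal sketch of comparing the three named classifiers on fault-signature
# features X (n_samples x n_features) with fault-type labels y.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def compare_fault_classifiers(X, y):
    models = {
        "LDA": LinearDiscriminantAnalysis(),
        "k-NN": KNeighborsClassifier(n_neighbors=5),
        "SVM": SVC(kernel="rbf", C=10.0),
    }
    # Mean 5-fold cross-validated accuracy per classifier.
    return {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
```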
Journal Publications:
1. I. M. Allafi and S. N. Foster, “Condition Monitoring Accuracy in Inverter-Driven Permanent Magnet Synchronous Machines Based on Motor Voltage Signature Analysis,” Energies, vol. 16, no. 3, p. 1477, Feb. 2023, doi: 10.3390/en16031477.
2. A. Aggarwal, I. M. Allafi, E. G. Strangas and J. S. Agapiou, "Off-Line Detection of Static Eccentricity of PMSM Robust to Machine Operating Temperature and Rotor Position Misalignment Using Incremental Inductance Approach," in IEEE Transactions on Transportation Electrification, vol. 7, no. 1, pp. 161-169, March 2021, doi: 10.1109/TTE.2020.3006016.
Journals under Review:
1. I. M. Allafi and S. N. Foster, “Power Signature Based Fault Diagnosis of Inverter-Driven Permanent Magnet Synchronous Machines,” in IEEE Transactions on Industry Applications, 2023
Conference Proceedings:
1. I. M. Allafi and S. N. Foster, "Condition Monitoring of Direct Torque Controlled Permanent Magnet Synchronous Machines," 2022 IEEE Energy Conversion Congress and Exposition (ECCE), Detroit, MI, USA, 2022, pp. 1-7, doi: 10.1109/ECCE50734.2022.9948136.
2. I. M. Allafi and S. N. Foster, "On the Accuracy of Frequency Based Fault Diagnosis for DTC-driven PMSM," 2022 International Conference on Electrical Machines (ICEM), Valencia, Spain, 2022, pp. 1628-1634, doi: 10.1109/ICEM51905.2022.9910619.
3. I. M. Allafi and S. N. Foster, "Fault Detection and Identification for Inverter-Driven Permanent Magnet Synchronous Machines," 2021 IEEE 13th International Symposium on Diagnostics for Electrical Machines, Power Electronics and Drives (SDEMPED), Dallas, TX, USA, 2021, pp. 358-364, doi: 10.1109/SDEMPED51010.2021.9605501.
Email sandra@msu.edu for Zoom information
Department:
Electrical and Computer Engineering
Name:
Cristian Javier Herrera-Rodriguez
Date Time:
Wednesday, May 10, 2023 - 12:00pm
Location:
C103 Engineering Research Complex and Zoom
Announcement:
ABSTRACT
Advisor: Timothy Grotjohn
Diamond is one of the most promising semiconductor materials for high-power and high-frequency electronic device applications because of its exceptional mechanical, electronic and thermal properties, including a wide band gap, high breakdown electric field, high carrier mobility and high thermal conductivity. All-diamond Schottky diodes and field effect transistors, as well as diamond/gallium oxide heterojunction pn diodes, were designed with Sentaurus TCAD simulations and then fabricated and tested.
Schottky Barrier Diodes (SBDs) are unipolar devices formed with a potential barrier at a metal-semiconductor interface. SBDs are well suited for fast switching and have a low voltage drop in the forward biased regime. Diamond-based SBDs were fabricated on layered highly/lightly boron doped (p+/p-, respectively) epilayers on diamond substrates. Tested diodes showed good behavior with some non-ideal characteristics. Simulations were performed in Sentaurus with a non-ideal metal-insulator-semiconductor interface for the Schottky contact to bring the modeled and measured diode characteristics into agreement.
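For context, the sketch below evaluates the textbook thermionic-emission I-V relation commonly used to interpret Schottky barrier diode measurements; the barrier height, ideality factor, and Richardson constant are illustrative assumptions, not values extracted in this work.

```python
# Hedged, textbook-level Schottky diode I-V sketch:
# I = A * A** * T^2 * exp(-q*phi_B/kT) * (exp(qV/(n*kT)) - 1).
import numpy as np

Q = 1.602e-19      # elementary charge, C
KB = 1.381e-23     # Boltzmann constant, J/K

def schottky_current(v, area_cm2=1e-4, a_star=90.0, phi_b=1.7, n=1.5, t=300.0):
    """Diode current (A); phi_b in eV, a_star in A/(cm^2 K^2). Illustrative values."""
    i_sat = area_cm2 * a_star * t**2 * np.exp(-Q * phi_b / (KB * t))
    return i_sat * (np.exp(Q * v / (n * KB * t)) - 1.0)

print(schottky_current(np.array([0.5, 1.0, 1.5])))   # forward-bias sweep
```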
Diamond pn devices are promising for ultra-high voltage applications (>10 kV); however, diamond pn junctions have limitations due to (1) a high turn-on voltage (~5 V), giving a significant on-state voltage drop, and (2) n-type diamond having higher resistivity and poor ohmic contacts. An alternative n-type ultra-wide bandgap (UWBG) semiconductor with shallow donor dopants is β-Gallium Oxide (β-Ga2O3). Gallium oxide has gained significant attention due to attractive properties such as its wide bandgap (4.85 eV) and high breakdown electric field, in the range of 8 MV/cm. Diamond's outstanding thermal properties allow it to serve as a heat spreader for high power operation, which can compensate for the poor thermal conductivity of β-Ga2O3. The combination of p-type diamond and n-type Ga2O3 gives the advantages of high thermal conductivity, good diamond p-type conduction, and good Ga2O3 n-type conduction. A pn junction model was developed in Sentaurus that included trap-assisted current flow at the heterojunction interface. Fabricated and tested p-type diamond / n-type Ga2O3 diodes are compared to simulations to understand the current flow mechanisms.
Diamond field effect transistors (FETs) can be built in various configurations including lateral metal-semiconductor FETs (MESFETs) and vertical junction FETs (JFETs), which are designed/simulated, fabricated and tested in this work. The MESFET was tested over a wide temperature range from 300 K to 700 K with the drain current almost constant from 425-700 K. Diamond material models of carrier ionization and mobility versus temperature were used in the Sentaurus simulations. A vertical JFET was designed/simulated and the fabrication processes were developed. The JFET showed gate control of the drain current, however the device leakage currents were high due to unwanted current conduction in selective area diamond growth regions.
Department:
Electrical and Computer Engineering
Name:
Luke Baumann
Date Time:
Wednesday, May 10, 2023 - 12:00pm
Location:
Zoom
Announcement:
ABSTRACT
Advisor: Dr. Shanker Balasubramaniam
Integral equations in Computational Electromagnetics (CEM) are one branch of a diverse field. There are many methods to solve for electromagnetic scattering and transmission, with boundary integral equations being among the most efficient. This efficiency comes from only needing to discretize the surface of the object, which leads to smaller, dense systems as opposed to the larger, sparse systems encountered in the Finite Element Method (FEM). There are additional methods that combine boundary integral methods with FEM, namely the Finite Element Boundary Integral (FEBI) method, which has the flexibility of using the more appropriate method as needed for a given region.
Within the subfield of boundary integral equations, there are many parts including the formulations, representation, testing, singularity treatment, numerical methods such as acceleration techniques, iterative and direct solvers, preconditioning, etc. In this thesis, I will present several new and existing formulations using the same formulation framework, demonstrate how to perform the integrals for analytic and piecewise basis and testing functions, show how to modify acceleration techniques for a wide range of integral equations, and show results of analysis throughout as needed.
The new formulations are well-conditioned, free from traditional breakdowns, and comparable to existing state-of-the-art formulations. They share the majority of their implementation with the formulations they are compared against, which limits unintended differences in the comparisons.
Department:
Electrical and Computer Engineering
Name:
Jacob Hawkins
Date Time:
Thursday, May 11, 2023 - 12:00pm
Location:
3105 Engineering Building and Zoom
Announcement:
ABSTRACT
Advisor: Dr. Shanker Balasubramaniam
Integral equations are used to analyze scattering from electromagnetic fields incident upon a perfect electrically conducting (PEC) object. Some common formulations are the electric field integral equation (EFIE), magnetic field integral equation (MFIE), and combined field integral equation (CFIE). Each of these formulations has challenges. The operator in the EFIE is ill-conditioned, and the formulation is non-unique. The operator in the MFIE is well-conditioned, but the formulation is also non-unique. The CFIE (a weighted sum of the EFIE and MFIE) is also ill-conditioned, but the formulation is unique. Due to provable uniqueness, the CFIE is often used in scattering analysis for closed, PEC objects.
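For reference, one common way of writing the weighted combination mentioned above is sketched below in generic notation; the specific weighting and impedance scaling used in this thesis may differ.

```latex
% A common CFIE weighting convention (generic; an assumption, not necessarily
% the convention adopted in this thesis):
\mathrm{CFIE} \;=\; \alpha\,\mathrm{EFIE} \;+\; (1-\alpha)\,\eta_0\,\mathrm{MFIE},
\qquad 0 < \alpha < 1,
```

where η0 denotes the free-space wave impedance, inserted so the two operators carry consistent units.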
One approach to improve conditioning for the CFIE is to use well-known Calderón identities and precondition the EFIE with the EFIE. These identities prove the EFIE operator acting on the EFIE operator is equal to a sequence of second-kind MFIE type operators. The Calderón preconditioner is often constructed with a lossy wavenumber to preserve the uniqueness of the CFIE formulation. The EFIE acting on the EFIE is analytically well-behaved but fraught with difficulties once the equations are discretized using the Method-of-Moments technique. The crux of the problem is the EFIE operator maps a div-conforming function to a curl-conforming function. Quasi-curl-conforming-divergence-conforming basis sets such as Buffa-Christiansen functions are needed to properly discretize the formulation, and these functions require significant, additional computation compared to the divergence-conforming RWG functions often used to discretize the CFIE.
This thesis takes a different starting point to solve the scattering problem for PEC objects. Instead of the CFIE, the decoupled field integral equation (DFIE) and decoupled potential integral equation (DPIE) are used to avoid low-frequency and dense-mesh breakdown, topology breakdown, and resonances (all of which contribute to ill-conditioning) for PECs. Also, the operators in the DPIE and DFIE map curl-conforming functions to curl-conforming functions and divergence-conforming functions to divergence-conforming functions. However, these formulations are not generally well-conditioned at high frequencies.
The primary contribution of this thesis is a new set of Calderón identities which may be used to construct O(N) preconditioners for a unique and wideband well-conditioned formulation of the DPIE or DFIE constrained to PEC objects. The new formulations are accelerable with fast methods like the multi-level fast multipole method (MLFMM) and open the door to quick and accurate computation of scattered fields from multi-scale and electrically large PEC objects using only RWG functions.
Email sandra@msu.edu for Zoom information
Department:
Electrical and Computer Engineering
Name:
Elliot Xin Lu
Date Time:
Tuesday, May 9, 2023 - 2:00pm
Location:
1400 Biomedical and Physical Sciences Building and Zoom
Announcement:
ABSTRACT
Advisor: Carlo Piermarocchi
The study of quantum optics is principally concerned with investigating light-matter interactions. Within the discipline, computational simulation is a burgeoning field that can lend new insights into optical phenomena previously uncovered by theory or experiment. Collective emission effects such as superradiance serve as one prominent example. In contrast to ordinary emissions, superradiance involves dipolar coupling within optical ensembles and produces a coherent burst of radiation whose intensity scales with the square of the number of emitters. Whereas theoretical results involving superradiance are often shoehorned into small, ideal systems, numerical simulations permit the examination of much larger realistic systems and can further aid in verifying experimental results. Studies of other phenomena, such as polarization enhancement, inhomogeneous broadening, and subradiance, benefit similarly.
To design new systems that exploit quantum optical effects, we devise in this thesis a new numerical approach that can faithfully simulate the dynamics of optically active media. Such materials are characterized by their ability to modify and re-emit radiation. Nanoscale semiconductor particles known as quantum dots serve as a prime example. Their larger dipole moments, compared to atoms, enable them to experience strong interactions with radiation fields and permit the observation of a variety of optical phenomena, including superradiance. Despite this merit, numerical simulation of large ensembles of quantum dots, and for long time periods, is challenging. In contrast to previous counterparts, our computational model, which involves the solution of the Maxwell-Bloch equations via integral-operator electric fields, is massively scalable in both time and space. This is facilitated by the Adaptive Integral Method (AIM), which uses FFT-based convolutions to evaluate the field. This allows us to perform large-scale simulations that reproduce optical effects such as superradiance.
To demonstrate the fidelity of our approach, we evaluate the rate of photon emission from our ensemble and show that it reproduces the quadratic scaling of superradiance. In simulations of medium-sized (N = 50-300) ensembles of quantum dots in a Gaussian cloud, we confirm this quadratic scaling by subtracting independent emissions from total emissions. We also observe anisotropy of emission, another hallmark of superradiance, in the field radiated by the Gaussian cloud. Subradiance is revealed in steady-state plots of the population excitation, which display diminished emissions. This effect is amplified by inhomogeneous broadening, which induces greater disorder and thus interference within the ensemble, but diminished by the presence of collective Lamb shifts.
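The quadratic-scaling check described above can be sketched, in a hedged way, as a log-log slope fit; the arrays below are placeholders for simulation output, not results from this dissertation.

```python
# Hedged sketch: verify I ~ N^2 superradiant scaling by fitting the slope of
# log(peak emission rate) versus log(N).
import numpy as np

def scaling_exponent(n_emitters, peak_rates):
    slope, _ = np.polyfit(np.log(n_emitters), np.log(peak_rates), 1)
    return slope  # a slope near 2 indicates superradiant (N^2) scaling

n = np.array([50, 100, 200, 300])
rates = 3.0e-2 * n**2          # placeholder stand-in for measured peak rates
print(scaling_exponent(n, rates))   # -> 2.0
```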
Additionally, we compare the results of this calculation to those obtained using another formalism, the Master equation. By applying zero-averaging random initial conditions to the polarization, we achieve strong numerical agreement between the two approaches. We observe both superradiant scaling and destructive interference among dots separated by half-wavelengths. We remark, however, that the Maxwell-Bloch model is superior to the Master equation in resolving time delays and capturing propagation and memory effects. Hence, simulations involving ensembles of emitters widely separated in space should opt for the Maxwell-Bloch approach to accurately account for delay effects.
Email sandra@msu.edu for Zoom information
Department:
Electrical and Computer Engineering
Name:
Omkar H. Ramachandran
Date Time:
Monday, May 8, 2023 - 10:30am
Location:
3405 Engineering Building and Zoom
Announcement:
ABSTRACT
Advisor: Dr. Shanker Balasubramaniam
The simulation of systems involving charged particles moving in the presence of electromagnetic fields is of great interest in a number of domains in physics, with applications including the characterization of pulsed power devices and accelerators and the design of high-precision etching and sterilization implements. As a result, several methods have been proposed to accurately simulate such systems. One such method is the particle-in-cell (PIC) technique, which characterizes the distribution of a plasma in phase space through a collection of statistically significant macroparticles. While contemporary implementations of electromagnetic PIC (EM-PIC) have typically relied on a finite-difference time-domain (FDTD) stencil to evaluate the fields, there has been a push for the adoption of finite element methods that allow for the use of better geometry representations and more robust function spaces. In particular, recent developments in the field have focused on developing implicit, unconditionally stable finite element field solvers that are free of mesh-dependent stability constraints while natively conserving fundamental quantities such as charge and energy within a PIC scheme.
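To make the particle-advance stage of a PIC step concrete, the sketch below shows the standard, textbook Boris push; this is a generic illustration only, not the exponential predictor-corrector integrators developed in this dissertation.

```python
# Generic, textbook Boris particle push (NOT the integrators of this work),
# shown only to illustrate the particle-advance stage of a PIC time step.
# Non-relativistic, SI units; E and B are 3-vectors at the particle position.
import numpy as np

def boris_push(x, v, e_field, b_field, q, m, dt):
    """Advance one macroparticle's position and velocity by dt."""
    qmdt2 = q * dt / (2.0 * m)
    v_minus = v + qmdt2 * e_field                 # first half electric kick
    t = qmdt2 * b_field                           # magnetic rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)       # rotation about B
    v_new = v_plus + qmdt2 * e_field              # second half electric kick
    return x + v_new * dt, v_new
```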
The goals of this dissertation are to develop efficient, charge-conserving, unconditionally stable finite element particle-in-cell (EM-FEMPIC) methods. First, (i) we construct a formulation of PIC built around exponential predictor-corrector particle integrators. We demonstrate that this approach has significantly better error convergence than equivalent polynomial methods, thus allowing for accurate evaluation of particle trajectories even at the large step sizes afforded by implicit EM solvers. Next, (ii) for devices with a narrowband response, we construct a novel EM-FEMPIC method based on envelope tracking. This allows us to accurately simulate the EM response of such a device while sampling at the narrow bandwidth, rather than at the highest absolute frequency of interest. Furthermore, we explore the consequences of such a method for charge conservation and propose a rubric to ensure exact satisfaction of Gauss' laws. We then consider (iii) the matter of energy conservation in an EM-FEMPIC scheme and propose a set of guidelines that ensure the conservation of average energy over the course of a simulation. Finally, (iv) we reformulate a parameter extraction method originally proposed for efficient device-agnostic simulation of EM systems attached to lumped nonlinear devices to make it applicable to a system of moving particles. We couple this approach with a domain-decomposition framework to construct an efficient, 'particle-agnostic' extraction framework. Taken together, these contributions address several open problems in the field and extend the applicability of EM-FEMPIC methods to larger, more relevant problems.
Email sandra@msu.edu for Zoom information
Department:
Electrical and Computer Engineering
Name:
Taha Yasin Posos
Date Time:
Monday, May 1, 2023 - 11:00am
Location:
2219 Engineering Building
Announcement:
ABSTRACT
Advisor: Sergey Baryshev
Large-area field emission cathodes made from carbon nanotube (CNT) fiber have long been promising as the next generation electron sources for high-power radio frequency (rf) or microwave vacuum electronic devices (VEDs). CNTs have excellent field emission properties such as low turn-on voltage and high output current at electric fields as low as ~10 MV/m, as compared to the legacy metal emitter technology. Therefore, CNT technology has the potential to decrease the operating voltage and simplify VED systems. However, in addition to high beam charge, beam-driven radiation sources require electron beams with low emittance (i.e. high brightness), which must be provided in a stable continuous fashion. Although there have been many studies on CNT fibers' emission current performance, there is not sufficient research on their emission uniformity, emittance, brightness, and overall upper performance limitations specific to the CNT material itself. The lack of these important characterization metrics led to the work presented in this thesis. Not only were the conventional current-voltage (I-V) relations measured and evaluated, but also the electron beams carrying the currents were monitored in situ in real-time by projecting the beam onto a scintillator screen in a custom field emission microscope. These enabled the measurement and evaluation of emittance and brightness. The existing bottlenecks limiting the fiber's performance were uncovered for the first time and new advanced CNT fiber cathode designs were proposed and engineered accordingly.
In Chapter 2, various standard (previously attempted) designs of CNT fiber cathodes were tested in the field emission microscope. The results showed that all cathodes had high emittance, low brightness, a large beam spread, non-uniform emission, current saturation, and instability. Hot spots and microbreakdowns were observed during emission. Analysis of the data revealed that all these problems were due to the formation of stray emitters on the cathode surface during emission. It was concluded that the tested fibers failed to provide any reasonable beam quality regardless of the cathode geometry.
Exceptionally non-uniform current emission observed in the experiments raised the question about the mechanism of current saturation when the output charge failed to keep up with the increasing electric field. In Chapter 3, a computational method was developed to extract the emission area from the emission micrographs and then calculate the emission current density. It was found that the current density saturated quickly and stopped obeying the Fowler-Nordheim law. It was demonstrated that the saturation effect occurred because the local current density reached a maximum level limited by the number of carriers and their finite transit time inside the bulk material's depletion region. It was concluded that overcoming the saturation issue is only possible if uniform emission can be achieved.
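The elementary Fowler-Nordheim relation referenced above can be sketched as follows; the work function and field-enhancement factor are illustrative assumptions for a generic carbon emitter, not values fitted in this thesis.

```python
# Hedged sketch of the elementary Fowler-Nordheim law:
# J = (A * beta^2 * E^2 / phi) * exp(-B * phi^(3/2) / (beta * E)).
import numpy as np

A_FN = 1.54e-6   # A eV V^-2 (standard FN constant)
B_FN = 6.83e9    # eV^-3/2 V m^-1 (standard FN constant)

def fn_current_density(e_applied, phi_ev=4.8, beta=500.0):
    """Emitted current density (A/m^2) vs applied macroscopic field (V/m)."""
    e_local = beta * e_applied                      # field-enhanced local field
    return (A_FN * e_local**2 / phi_ev) * np.exp(-B_FN * phi_ev**1.5 / e_local)

print(fn_current_density(np.array([5e6, 10e6, 20e6])))  # fields of a few MV/m
```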
In Chapter 4, a brand new and unique cathode design was developed that successfully solved all the problems caused by stray emitters. It was demonstrated that the new design provided a uniform and stable electron beam with a small divergence angle, resulting in a beam with low emittance and high brightness. This result is a significant advancement that outlines a feasible path toward utilizing CNT fiber electron sources for practical VED applications. More specifically, it was observed that the entire cathode surface, of radius approximately 75 μm, emitted uniformly (with no hot spots) in the direction of the applied electric field. From this, the normalized dc current brightness was estimated as BN = 3.7×10^10 A/(m^2 rad^2) using the estimated emittance of 52 nm rad. Based on this, the brightness in the pulsed mode, the preferable mode in most VED HPM applications, was predicted to attain a notable value of BN = 4.4×10^15 A/(m^2 rad^2).
Journal Publications:
1. T.Y. Posos, O. Chubenko, and S.V. Baryshev, “Confirmation of Transit Time-Limited Field Emission in Advanced Carbon Materials with a Fast Pattern Recognition Algorithm”, ACS Applied Electronic Materials 3.11, 4990 (2021), doi:10.1021/acsaelm.1c00789
2. T.Y. Posos, S.B. Fairchild, J. Park, and S.V. Baryshev, “Field emission microscopy of carbon nanotube fibers: evaluating and interpreting spatial emission”, Journal of Vacuum Science & Technology B 38.2, 024006 (2020), doi:10.1116/1.5140602
3. M.E. Schneider, H. Andrews, S.V. Baryshev, E. Jevarjian, D. Kim, K. Nichols, T.Y. Posos, M. Pettes, J. Power, J. Shao, and E.I. Simakov, “Evaluating Effects of Geometry and Material Composition on Production of Transversely Shaped Beams from Diamond Field Emission Array Cathodes”, Appl. Phys. Lett. 122.5, 054103 (2023), doi:10.1063/5.0128148
4. M.E. Schneider, B. Sims, E. Jevarjian, R. Shinohara, T. Nikhar, T.Y. Posos, W. Liu, J. Power, J. Shao and S.V. Baryshev, “Ampere-class bright field emission cathode operated at 100 MV / m,” Phys. Rev. Accel. Beams 24.12, 123401 (2021), doi:10.1103/PhysRevAccelBeams.24.123401
Journals under Review:
1. T.Y. Posos, Jack Cook and S.V. Baryshev, “Bright Spatially Coherent Beam from Carbon Nanotube Fiber Field Emission Cathode”, arXiv 2301.06529 (2023), doi:10.48550/arXiv.2301.06529
Conference Proceedings:
1. Z. Li, S.V. Baryshev, T.Y. Posos, M.E. Schneider, and S.G. Tantawi, “RF Design of an X-Band TM02 Mode Cavity for Field Emitter Testing”, Proc. 12th International Particle Accelerator Conference (IPAC’21), 2961, JACoW Publishing (2021), doi:10.18429/JACoW-IPAC2021-WEPAB148
Conference Presentations:
1. T.Y. Posos, Jack Cook, S.V. Baryshev, “Enabling Bright Carbon Nanotube Fiber Field Emission Cathode”, 13th Annual Graduate Symposium (MIPSE 2022), Michigan Institute for Plasma Science and Engineering
2. T.Y. Posos, “High Brightness Carbon Nanotube Fiber Field Emission Cathode”, 34th International Vacuum Nanoelectronics Conference (IVNC 2021), IEEE
3. T.Y. Posos, O. Chubenko, and S.V. Baryshev, “Field Emission Microscopy of Diamond and Nanotube Materials”, 9th International Workshop on Mechanism of Vacuum Arcs (MeVArc 2021)
4. T.Y. Posos, S.B. Fairchild, J. Park, and S.V. Baryshev, “Field Emission Microscopy of CNTs Fiber”, 32nd International Vacuum Nanoelectronics Conference (IVNC 2019), IEEE
5. T.Y. Posos, S.B. Fairchild, J. Park, and S.V. Baryshev, “Field Emission Microscopy of Looped CNT Fiber”, 2019 Engineering Graduate Research Symposium (EGRS 2019), Michigan State University College of Engineering
Email sandra@msu.edu for Zoom information
Department:
Electrical and Computer Engineering
Name:
Yan Gong
Date Time:
Tuesday, April 11, 2023 - 1:00pm
Location:
Zoom
Announcement:
ABSTRACT
Advisor: Wen Li
To date, a wide variety of neural tissue implants have been developed for neurophysiological recording from living tissues, and neural interfaces provide a direct communication pathway between nervous systems and machines. This direct communication pathway offers a new potential method to study how neurons work and to manipulate neuronal activity. At the same time, many challenges that have arisen with the rapid development of biomedical implants need to be overcome. First, an ideal neural implant should ensure its own safety, which means minimizing damage to the tissue and performing reliably and accurately for long periods of time. Beyond safe implantation, future tools are required to offer better recording capabilities and to be flexible and configurable. For decades, many artificial neural interfaces have evoked sensation in the central and peripheral nervous systems (CNS and PNS, respectively) using electrical signals. However, electrical stimulation has many limitations and difficulties and is rarely the best solution, so neural stimulation technology needs improvement. Optogenetics, a rising approach in the field of neural interfaces, has proven its capabilities through direct optical stimulation of genetically modified target neuron populations, achieving dramatic advantages over traditional methods in spatial and temporal resolution.
This written report presents the development of an origami implantable recording array integrated with multiple micro-LEDs and conducts systematic research on the challenges mentioned above, including but not limited to packaging techniques, packaging materials, and the evaluation of encapsulation in reactive environments.
In order to systematically study packaging materials and packaging techniques, the properties of different materials are discussed for the chronic implantation of devices in the complex environment of the body, including biocompatibility and moisture and gas hermeticity. This report summarizes common solid and soft packaging used in a variety of neural interface designs, as well as their packaging performance in terms of electrical properties, mechanical properties, stability, biodegradability, biocompatibility, and optical properties.
To study reliable packaging for implantable neural prosthetic devices in body fluids, this report examined the stability of Parylene C (PA), SiO2, and Si3N4 packages and coating strategies on tungsten wires using accelerated, reactive aging tests in three solutions: pH 7.4 phosphate-buffered saline (PBS), PBS + 30 mM H2O2, and PBS + 150 mM H2O2, to simulate different inflammation conditions. Different combinations of coating thicknesses and deposition methods, chosen to meet different design requirements, were studied at various testing temperatures to accelerate the aging process.
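For orientation, the widely used "10-degree rule" for accelerated aging is sketched below (acceleration factor Q10 raised to the temperature excess over body temperature in 10 °C steps, with Q10 of about 2); this is a common convention in the accelerated-aging literature and is not necessarily the exact protocol of this work. The temperatures are examples only.

```python
# Hedged sketch of the common Q10 accelerated-aging estimate:
# acceleration factor AF = q10 ** ((T_test - T_body) / 10).

def equivalent_aging_days(days_at_test, t_test_c, t_body_c=37.0, q10=2.0):
    """Real-time-equivalent aging duration for a soak at elevated temperature."""
    af = q10 ** ((t_test_c - t_body_c) / 10.0)
    return days_at_test * af

print(equivalent_aging_days(days_at_test=30, t_test_c=67.0))  # 30 d at 67 C ~ 240 d at 37 C
```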
Finally, these packaging techniques and this materials knowledge were used to fabricate origami neural implants. A 2D-to-3D convertible, thin-film opto-electro array with 4 addressable microscale light-emitting diodes (LEDs) for surface illumination and 9 penetrating electrodes for simultaneous recording has been developed. The fabrication methods are discussed, and the electrical, optical, and thermal characteristics of the opto-electro array are quantified.
Email sandra@msu.edu for Zoom information
Department:
Electrical and Computer Engineering
Name:
Abdullah Karaaslanli
Date Time:
Tuesday, April 4, 2023 - 2:30pm
Location:
2555D Engineering Building and Zoom
Announcement:
ABSTRACT
Advisor: Dr. Selin Aviyente
Community detection and graph learning are two important problems in graph analysis. The former deals with the topological analysis of graphs to identify their mesoscale organization, while graph learning aims to infer the interactions between nodes of a graph from data when the graph topology is not known a priori. Existing community detection and graph learning methods are mostly limited to single-layer graphs, where nodes are assumed to be connected by a single static edge. However, this assumption ignores the fact that many real-world relational data have multiple dimensions, which can be better represented with multilayer graphs. In this thesis, we propose various community detection and graph learning methods for different types of multilayer graphs.
In Chapter 2, we tackle the community detection problem in dynamic networks. Specifically, we focus on evolutionary spectral clustering, which extends spectral clustering to dynamic networks to learn a community structure that changes smoothly over time. We show the equivalence of evolutionary spectral clustering to a variant of dynamic stochastic blockmodel. For this purpose, we first introduce a novel dynamic SBM where the evolution of communities over time is modeled with pairwise Markov random fields. We then show that the log-posterior of the proposed model is equivalent to the quality function of evolutionary spectral clustering. This equivalence is used to determine the forgetting factor in evolutionary spectral clustering and to develop two new algorithms for dynamic community detection. The proposed algorithms are applied to both simulated and real-world dynamic networks and their performances are compared to state-of-the-art dynamic community detection methods.
Chapter 3 introduces a multilayer community detection method, which is especially tailored to handle multilayer brain networks constructed from electroencephalogram (EEG) data. In particular, we first construct functional multilayer networks from EEG data, where layers correspond to different frequency bands and interlayer edges are allowed between all brain regions. Next, a new multilayer modularity metric is defined based on a multilayer null model that preserves the layer-wise node degrees while randomizing the remaining characteristics of the network. The proposed modularity is parameterized with a resolution parameter, to handle the resolution limit of modularity, and an interlayer scale parameter, to control the importance of interlayer edges in community formation. Third, a group community detection method is proposed to find the common community structure for a set of subjects. The proposed multilayer community detection method is employed to identify group-level differences between the two response types during a Flanker task, i.e., error and correct.
In Chapter 4, we present an algorithm to learn signed graphs, which we represent as two-layer multiplex networks where one layer corresponds to positive edges and the other to negative edges. The algorithm builds on graph learning approaches developed using graph signal processing (GSP). Existing graph learning methods rely on the smoothness of graph signals over the graph; however, they are only capable of learning unsigned graphs. To this end, we propose a signed graph learning approach that learns signed graphs based on the assumption of smoothness and non-smoothness of graph signals over positive and negative edges, respectively. The proposed method is further extended using kernels to take the nonlinear relations between nodes into account. From a GSP perspective, this extension corresponds to assuming smoothness/non-smoothness of graph signals in a higher-dimensional space defined by the kernel. The proposed approach is applied to the problem of gene regulatory network inference from single-cell gene expression data. Experiments on simulated and real single-cell datasets show that the method compares favorably with other single-cell gene regulatory network reconstruction algorithms.
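The smoothness/non-smoothness criterion underlying this approach can be sketched with the standard graph total variation; the adjacency matrices and signal values below are illustrative placeholders, not data or code from the thesis.

```python
# Hedged sketch: graph signals should have small total variation tr(X^T L X)
# over the positive layer and large total variation over the negative layer.
import numpy as np

def total_variation(adj, signals):
    """tr(X^T L X) with Laplacian L = D - W; signals has shape (nodes, observations)."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.trace(signals.T @ lap @ signals)

w_pos = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)  # positive edge (0,1)
w_neg = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=float)  # negative edge (0,2)
x = np.array([[1.0], [0.9], [-1.1]])          # one graph signal over 3 nodes

print(total_variation(w_pos, x))   # small: smooth over positive edges
print(total_variation(w_neg, x))   # large: non-smooth over negative edges
```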
Chapter 5 addresses the problem of learning multiple signed graphs simultaneously. Existing GSP-based graph learning approaches for this problem are limited to unsigned graph topologies. Therefore, we extend the algorithm developed in Chapter 4 to learn multiple signed graphs. In particular, given multiple datasets, each of which includes graph signals associated with a signed graph, we assume smoothness and non-smoothness of graph signals as in Chapter 4. Furthermore, we assume that the signed graphs are similar to each other, which is ensured by regularizing the learned signed graphs through a learned signed consensus graph. The proposed method is employed for the joint inference of multiple gene regulatory networks from single-cell gene expression data. Experiments on simulated and real single-cell datasets show that the method performs better than methods that learn a single graph at a time and than previous joint gene regulatory network reconstruction algorithms.
In Chapter 6, we tackle the problem of learning multiple unsigned graphs from a heterogeneous dataset, which requires clustering graph signals while learning a graph for each cluster. Namely, we present an optimization problem for joint graph signal clustering and graph topology inference. The approach extends graph cut based clustering by partitioning the graph signals not only based on their pairwise similarities but also their smoothness with respect to the graphs associated with the clusters. The proposed method also learns the representative graph for each cluster using the smoothness of the graph signals with respect to the graph topology. Results on simulated and real data indicate the effectiveness of the proposed method.
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Electrical and Computer Engineering Department at 355-5066 at least one day prior to the seminar; requests received after this date will be met when possible.
Email sandra@msu.edu for Zoom information
Department:
Electrical and Computer Engineering
Name:
Adamantia Chletsou
Date Time:
Tuesday, January 31, 2023 - 9:00am
Location:
2219 Engineering Building and Zoom
Announcement:
ABSTRACT
Advisor: Dr. John Papapolymerou
This dissertation demonstrates the implementation methods and performance of antennas on different substrates using the traditional lithography method and Additive Manufacturing (AM) techniques. The developed devices are used for biomedical applications and vehicular communications. The effectiveness of using photonic curing and reactive silver ink to develop 3D printed antennas on thermo‐sensitive substrates is investigated. Intense Pulsed Light (IPL) is used to cure silver nano‐particle ink on the automotive Acrylonitrile Butadiene Styrene (ABS) and the vero‐white polymer. Different curing profiles of IPL are tested on the ABS and the vero‐white to identify the optimal one. Development of antennas using lithography, Aerosol Jet Printer (AJP) combined with thermal curing, AJP combined with photonic curing, and AJP combined with reactive ink is investigated and their overall performance is compared.
The first step of this dissertation is to explore the antenna design that is optimal for biomedical, Radio Frequency Identification (RFID) applications, operating inside human muscle and in free space. The next step is the development of a dual‐band, planar antenna for automotive applications using lithography on a flexible, lightweight substrate and AM techniques on ABS. The antenna performance is tested on a real vehicle and the effects of the ground on the antenna radiation pattern are identified. Co‐Planar Waveguide (CPW) lines are developed using the same procedure to identify the losses due to silver conductivity. Thereafter, an Electrically Small Antenna (ESA) is developed on a 3D printed hemisphere for vehicular communications. Prototypes of this antenna are tested on a real vehicle and a ground plane inside a near field system. The effect of the vehicle body on the antenna performance is evaluated.
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Electrical and Computer Engineering at 355‐5066 at least one day prior to the seminar; requests received after this date will be met when possible.
Department: Mechanical Engineering
Name: Amin Vahidimoghaddam
Date Time: Thursday, December 5th, 2024 - 8:30 a.m.
Advisor: Dr. Zhaojian Li
Nonlinear optimal control schemes have achieved remarkable performance in numerous engineering applications; however, they typically incur high computational cost, which has limited their use in real-world systems with fast dynamics and/or limited computational power. To address this challenge, neighboring extremal (NE) methods have been developed as an efficient optimal adaptation strategy that adapts a pre-computed nominal control solution to perturbations from the nominal trajectory. The resulting control law is a time-varying feedback gain that can be pre-computed along with the original optimization problem, so the online computation is negligible. This thesis focuses on reducing the computational cost of nonlinear optimal control problems using the NE in two parts. In Part I, we tackle model-based nonlinear optimal control and propose an extended neighboring extremal (ENE) to handle model uncertainties and reduce computational cost. Nonlinear model predictive control (NMPC), which explicitly handles system constraints, is considered as the case study due to its popularity, but the ENE can be easily extended to other model-based nonlinear optimal control schemes. In Part II, we address data-driven nonlinear optimal control and introduce a data-enabled neighboring extremal (DeeNE) to remove the parametric-model requirement and reduce the computational cost. As a purely data-driven optimal and safe controller, data-enabled predictive control (DeePC) shifts from model-based optimal control to a data-driven paradigm: it seeks an optimal control policy from raw input/output (I/O) data without encoding the data into a parametric model or requiring system identification prior to control deployment. The DeePC is considered as the case study, but the DeeNE can be easily extended to other data-driven nonlinear optimal control approaches. We also develop an adaptive DeePC and implement the DeeNE on a real-world robot arm.
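A central data-driven ingredient of DeePC is the block-Hankel matrix built from recorded trajectories, whose columns stand in for a parametric model; the hedged sketch below shows only that construction, with illustrative dimensions and no claim to reproduce the thesis implementation.

```python
# Hedged sketch: block-Hankel matrix of a recorded input (or output) trajectory,
# as used in data-enabled predictive control to replace a parametric model.
import numpy as np

def block_hankel(signal, depth):
    """signal: (T, dim) trajectory; returns a (depth*dim, T-depth+1) Hankel matrix."""
    t, dim = signal.shape
    cols = t - depth + 1
    return np.vstack([signal[i:i + cols].T for i in range(depth)])

u = np.random.randn(50, 1)            # recorded scalar input trajectory (illustrative)
H = block_hankel(u, depth=8)          # column j stacks u_j, u_{j+1}, ..., u_{j+7}
print(H.shape)                        # -> (8, 43)
```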
Department: Mechanical Engineering
Name: Haritha Naidu Mullagura
Date Time: Monday, November 11th, 2024 - 11:00 a.m.
Advisor: Dr. Seung Baek
Pulmonary arterial hypertension (PAH) is a progressive and multifactorial disease characterized by pathological vascular remodeling, metabolic shifts, and dysregulation of key pathophysiological pathways. Predicting patient-specific responses to treatment requires a detailed understanding of pulmonary arterial mechanics, particularly the complex interactions between vascular geometry, hemodynamics, and pharmacological effects. However, most existing computational models are centered on healthy vasculature and fail to incorporate the influence of pharmacological treatment pathways in diseased states. To bridge this gap, we have developed a novel computational framework: a bio-chemomechanical model that integrates the essential biomechanical features of PAH-affected arteries and predicts arterial responses to various therapeutic interventions.
Our research group has previously established a healthy pulmonary arterial vasculature model using a homeostatic optimization process, an extension of Murray's law. This optimization minimizes the total energy required to maintain blood flow, accounting for viscous dissipation, metabolic costs, and mechanical equilibrium constraints. By doing so, it generates a geometrically and energetically optimized arterial tree representative of a healthy physiological state. However, in contrast to the healthy vasculature model, which results from optimizing metabolic energy consumption, there is a growing body of evidence that the homeostatic stress state and metabolic energy consumption of resident cells are altered during the progression of PAH. For instance, studies have shown that pulmonary artery smooth muscle cells (PASMCs) in PAH shift towards glycolysis, even in the presence of oxygen, a phenomenon known as the Warburg effect. Mitochondrial dysfunction, reduced oxidative phosphorylation, and decreased ATP production further disrupt energy dynamics in PAH-affected cells. Additionally, the upregulation of hypoxia-inducible factor (HIF) in PAH patients triggers cellular responses that promote vascular remodeling and metabolic shifts.
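For readers unfamiliar with Murray's law, the classical trade-off it encodes is sketched below in generic notation; the group's homeostatic optimization extends this with additional constraints, so the sketch is only a reminder of the underlying idea.

```latex
% Classical Murray trade-off (generic notation): cost per unit vessel length is
% viscous dissipation plus metabolic maintenance proportional to blood volume.
W(r) = \frac{8\mu Q^{2}}{\pi r^{4}} + \alpha_{b}\,\pi r^{2},
\qquad
\frac{dW}{dr} = 0 \;\Rightarrow\; Q = \sqrt{\frac{\alpha_{b}\pi^{2}}{16\mu}}\; r^{3},
```

so flow scales with the cube of the radius and, at a bifurcation, r0^3 = r1^3 + r2^3.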
Therefore, rather than utilizing metabolic optimization, we create an in-silico PAH model using a data-driven approach; i.e., the work relies on experimental data to inform its structural and functional changes, reflecting the complexity of the disease. Specifically, starting from the healthy model, we incorporate changes in geometry, hemodynamics, and pathological factors derived from experimental studies on PAH. Given the limited availability of metabolic cost data specific to PAH, we propose a set of testable hypotheses for computing metabolic energy consumption in the diseased vasculature, which enhances our understanding of the role of altered metabolic processes using the existing literature.
Once the biomechanical structure of the PAH vasculature is established, we conduct an in-depth study of the chemical pathways involved in PAH treatment. This includes the development of mathematical models for key signaling pathways such as the nitric oxide-cGMP-PKG pathway, which plays a pivotal role in smooth muscle cell relaxation and vasodilation. Additionally, we perform pharmacokinetic analyses on various drugs, including PDE5 inhibitors, and Sotatercept, to evaluate their effects on the vasculature.
The resulting bio-chemomechanical model integrates these biomechanical and chemical processes, offering a comprehensive framework capable of predicting arterial responses to different PAH treatments. The model captures the dynamic interactions between hemodynamics, vascular geometry, and the pharmacological mechanisms underlying various therapies. By simulating these interactions, the model provides valuable insights into how different treatments impact arterial mechanics and can be used to guide personalized therapeutic strategies.
In conclusion, this integrated framework presents a promising tool for advancing personalized medicine in PAH management. By simulating both the mechanical and chemical responses of the pulmonary vasculature to various treatments, the model enhances our ability to predict patient-specific treatment outcomes. Moreover, it can be extended to explore other therapeutic pathways and vascular diseases, providing a versatile platform for future research into vascular remodeling and pharmacological interventions.
Department: Mechanical Engineering
Name: Amirreza Gandomkar Ghalhar
Date Time: Wednesday, November 6th, 2024 - 1:00 p.m.
Advisor: Dr. Patton Allison
This thesis presents a comprehensive study of liquid fuel flame topologies through the development and application of novel diagnostic techniques. The complexities associated with liquid fuel combustion, particularly in the context of aviation and aerospace applications, demand a deeper understanding of flame behavior and stability. Traditional diagnostic methods often fall short due to the intricate interactions between liquid droplets, flame surfaces, and multi-component fuel mixtures.
Our research focuses on addressing these challenges by introducing advanced diagnostic approaches to investigate the structure, stability, and extinction characteristics of liquid fuel flames. Key areas of exploration include the identification and analysis of reaction zones, the impact of vaporization dynamics, and the effects of turbulent flow conditions on flame
stabilization. To achieve this, we employ Laser-Induced Fluorescence (LIF) and chemiluminescence imaging, alongside advanced numerical image processing algorithms to capture high-resolution data on flame behavior. These methods enable us to discern fine details about flame front interactions, droplet vaporization, and localized extinction events. By refining these diagnostic tools, we aim to provide clearer insights into the parameters influencing flame stability, such as equivalence ratio, mixing efficiency, and preheat temperature.
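As a loose illustration of the kind of image processing involved in extracting a flame front from a chemiluminescence (or LIF) frame, the sketch below smooths, thresholds, and takes the contour gradient of a synthetic image. It is not the thesis's algorithm; the synthetic frame and threshold choice are assumptions.

```python
# Hedged sketch of flame-front extraction from an intensity image: smooth, binarize,
# take the boundary of the reaction zone. Synthetic data; illustrative only.
import numpy as np
from scipy import ndimage

# Synthetic 256x256 "flame" intensity field standing in for a camera frame.
y, x = np.mgrid[0:256, 0:256]
frame = np.exp(-((x - 150)**2 + (y - 128)**2) / (2 * 40.0**2))
frame += 0.05 * np.random.default_rng(0).standard_normal(frame.shape)  # shot noise

smoothed = ndimage.gaussian_filter(frame, sigma=3)          # suppress noise
burned = smoothed > 0.5 * smoothed.max()                    # binarize reaction zone
grad_mag = np.hypot(ndimage.sobel(burned.astype(float), axis=0),
                    ndimage.sobel(burned.astype(float), axis=1))
flame_front = grad_mag > 0                                  # pixels on the interface
print(f"flame-front pixels: {int(flame_front.sum())}")
# Frame-to-frame tracking of this contour gives flame-surface statistics and reveals
# localized extinction events as breaks in the contour.
```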
In addition, the study integrates computational simulations using CHEMKIN to validate experimental results, allowing for a more comprehensive understanding of how liquid fuel combustion behaves under varying conditions of turbulence and strain rates. The combined experimental and computational approach ensures that the findings are both robust and applicable to real-world aerospace scenarios.
The findings of this study contribute to the broader understanding of liquid fuel combustion processes and offer valuable implications for the design and optimization of more efficient and stable combustion systems in aerospace applications. This research not only enhances our theoretical knowledge but also provides practical guidelines for improving flame diagnostics and combustion performance.
Department: Mechanical Engineering
Name: Mohamed Abdullah Alhaddad
Date Time: Friday, September 6th, 2024 - 10:30 a.m.
Advisor: Dr. Andre Benard
Modeling the rate of fluid penetration into capillaries due to surface tension forces is often based on the Poiseuille flow solution. However, this model does not apply to short capillaries due to non-fully developed conditions at the entrance and exit regions. Improved models are needed for small capillary systems, which are crucial in processes such as oil droplet removal from water using thin membranes. Previous research has addressed deviations from Poiseuille flow near the entrance and moving meniscus, including the use of momentum conservation equations and inertia forces in kinetic models for infinite flow entering capillary tubes. Some studies have considered finite reservoir infiltration, assuming parallel flow lines, but neglected local acceleration due to inertia and gravity effects. This study presents a novel analysis focusing on the dynamic behavior of droplets in pores. It models a finite flow reservoir associated with a droplet and includes drag forces at the capillary channel entrance. The mathematical model incorporates pressure losses due to sudden contraction and viscous dissipation at the tube entrance, which can be significant in low Reynolds number flows. Additionally, it considers energy dissipation due to contact angle hysteresis. The model addresses an apparent anomaly posed by Washburn-Rideal and Levin-Szekely, and is applied to various liquids including water, glycerin, blood, oil, and methanol. It is tested with different geometries and cases, including numerical simulations, showing close agreement with experimental literature. Deviations are observed when comparing infinite reservoir flow to finite droplet flow.
A parametric study evaluates the effects of dimensionless numbers such as capillary, Reynolds, Weber, and Froude numbers. Results suggest the Weber number's importance over the capillary number in droplet dynamics. The study also examines finite flow and film penetration in single pores versus pore networks. Computational simulations using ANSYS-FLUENT 23 R2 provide 2D results, using User Defined Functions (UDF) to capture liquid-gas interfaces. These simulations corroborate the mathematical model. Contrary to previous findings, this study demonstrates that contact angle effects are significant in the initial stages of capillary penetration. The proposed solution is valid for very short initial times, applicable to printing, lithographic operations, and filtration systems dealing with oil droplet removal from water using membranes.
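For reference, the classical Lucas-Washburn result gives the baseline that the present model improves upon: penetration length l(t) = sqrt(gamma r cos(theta) t / (2 mu)) for a long capillary. The sketch below evaluates it with assumed pore radius and contact angle; the dissertation's model adds entrance losses, inertia, finite-reservoir (droplet) effects, and contact-angle hysteresis, which matter precisely where this formula breaks down (short times and short tubes).

```python
# Baseline comparison only: classical Lucas-Washburn capillary penetration length.
import numpy as np

gamma = 0.072             # surface tension of water [N/m]
mu = 1.0e-3               # viscosity of water [Pa*s]
r = 5.0e-6                # pore radius [m] (assumed)
theta = np.deg2rad(30.0)  # equilibrium contact angle (assumed)

t = np.logspace(-6, 0, 7)                       # 1 microsecond to 1 second
l = np.sqrt(gamma * r * np.cos(theta) * t / (2.0 * mu))
for ti, li in zip(t, l):
    print(f"t = {ti:8.1e} s  ->  l = {li*1e6:10.1f} um")
```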
Furthermore, the framework allowed us to examine two different approaches to delay lithium plating in graphite: a thermodynamic approach using hybrid anodes, in which graphite is mixed with hard carbon, and a kinetic approach using tunnels, in which synthetic channels are introduced into the electrode. Through our simulations, we identify that hard carbon particles act as a buffer for lithiation in hybrid anodes, delaying the surface saturation of graphite particles and thus delaying lithium plating on graphite. On the other hand, creating tunnels generates easier paths for ion diffusion and therefore leads to better utilization of the electrode. Such channels in thick electrodes can produce high-capacity and efficient electrodes. Finally, the development of this framework culminates with a demonstration of full-cell simulations. In summary, simulating electrochemical processes in complex electrode microstructures is streamlined by the presented framework, which offers a fast and robust tool for designing and studying microstructures.
Department: Mechanical Engineering
Name: Igor Igorevich Bezsonov
Date Time: Tuesday, August 27th, 2024 - 10:00 a.m.
Advisor: Dr. Siva Nadimpalli
Modern technology, from portable electronics to electric vehicles, is becoming increasingly reliant on lithium-ion (Li-ion) batteries for energy storage. This chemistry possesses a desirable combination of high power and high energy densities and is therefore widely used, but safety is still a significant issue. The risk of thermal runaway (TR) is a major roadblock to the widespread use of this technology. TR is a self-sustaining exothermic reaction which can be triggered by mechanical or electrical damage to a cell, overheating, or by latent defects from manufacturing. The volumetric changes within a cell’s electrodes and internal gas generation can be detected by strain measurements on the surface of the casing, which can complement the electrical and thermal data used by battery management systems (BMS) and even provide insight into the state of a battery when electrical contact has been lost. This research project demonstrates the utility of strain measurements to detect abnormal Li-ion cell behavior and precursors to TR. First, a baseline was established to identify the strain response of Li-ion cells under normal operating conditions, accounting for temperature and cycling rate (or C-rate) effects. Then, the cells were cycled under abuse conditions to identify signs of damage and identify signs of TR onset through strain measurements. The final step was to develop a model which used fundamental data and electrochemical input to predict the mechanical behavior of individual electrodes and full 18650 cells.
The samples used in this research were commercial 18650 format (18 mm in diameter, 65 mm tall) cylindrical cells with graphite-silicon anodes and nickel cobalt aluminum oxide (NCA) cathodes. Strain data was collected using strain gages bonded to the cell casing and was used to characterize their mechanical behavior during both normal and abuse cycling conditions. During a charge-discharge cycle at normal conditions, the surface strain was found to be nearly reversible – that is, the strain states at the beginning of charge and the end of discharge were almost the same. The strain profile of the cells was analyzed and found to be directly related to electrochemical reactions occurring within the electrodes, as evidenced by dQ/dV and dε/dV plots. The fact that the dε/dV peaks coincide with – and sometimes precede – the peaks in the dQ/dV plots shows that the electrochemical reactions occurring within the electrodes during charge and discharge can be sensed through strain measurements on the surfaces of cell casings.
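The differential analysis described above can be illustrated with a short sketch: numerically differentiating capacity and surface strain with respect to cell potential and comparing peak locations. The arrays below are synthetic stand-ins for measured data, not results from these cells.

```python
# Minimal sketch of dQ/dV and d(eps)/dV analysis on synthetic cycling data.
import numpy as np

V = np.linspace(3.0, 4.2, 600)                                  # cell potential [V]
Q = 2.5 * (V - 3.0) / 1.2 + 0.1 * np.sin(8 * V)                 # capacity [Ah], synthetic
eps = 400e-6 * (V - 3.0) / 1.2 + 20e-6 * np.sin(8 * V + 0.3)    # surface strain, synthetic

def smooth(x, n=15):
    # simple moving average to tame differentiation noise
    return np.convolve(x, np.ones(n) / n, mode="same")

dQdV = smooth(np.gradient(Q, V))
dEdV = smooth(np.gradient(eps, V))
# Peaks in dQ/dV mark electrode phase transitions; comparing their positions with peaks
# in d(eps)/dV is how the strain signature is tied to the electrochemistry.
print(f"dQ/dV peak at {V[np.argmax(dQdV)]:.3f} V, d(eps)/dV peak at {V[np.argmax(dEdV)]:.3f} V")
```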
With the baseline established, cells were then subjected to several abuse scenarios. During the first abuse scenario, cells were overcharged to failure, which came in the form of current interrupt device (CID) activation. During overcharge (past 4.2V) the cell potential was seen to increase quickly and reached a plateau at approximately 5V, shortly after which the CID activated, and the cell became electrically inaccessible (0V). The cells’ surface strain also increased dramatically during this abuse scenario, reaching a value that was more than double the peak strain during normal cycling. The CID-activated cells were then heated to TR, during which two events were identified from the strain signature as signs/precursors to TR which could be used for prediction and prevention purposes. Cells were also repeatedly overcharged to 105% and 110% nominal capacity, named 5% and 10% overcharge (OC), respectively. Maximum strain, potential, and temperature were seen to increase slowly during the 5% OC experiments, and quickly during 10% OC, during which the CID activated after an average of 11 cycles. Strain at full discharge (referred to as residual strain) reached a progressively higher value after each OC cycle and was found to closely correlate to the pressure needed to activate the CID. Electrochemical impedance spectroscopy, dQ/dV, and dε/dV analyses confirmed that the degradation modes present were mostly caused by loss of lithium inventory processes. The insights gained from strain measurement, including the ability to predict CID activation, are discussed.
A finite element analysis modeling approach to predict the mechanical behavior of individual electrodes and full cells was developed. Electrochemistry was solved in COMSOL Multiphysics using a pseudo 4-dimensional (P4D) model to predict the cell potential and the state of charge of the active material within electrodes. Mechanics were coupled to electrochemistry through volumetric changes of the active material and a thermal strain analogy. The effective mechanical properties of the electrodes were calculated using the Mori-Tanaka homogenization scheme, with the development and assumptions explained fully in this work. The homogenized properties were compared to experimental and published results and were found to be in good agreement. Simulations for stress in graphite anode and nickel manganese cobalt oxide cathode were in agreement with published data. Predictions were also made for graphite-silicon anodes and NMC cathode and a geometry representative of an 18650 format battery. The limitations and future improvements for this model are discussed.
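To illustrate the homogenization step named above, the sketch below evaluates one common closed-form Mori-Tanaka estimate for spherical particles in an isotropic binder. The dissertation gives its own development and assumptions; the moduli and volume fraction here are placeholders, not measured electrode properties.

```python
# Sketch of a Mori-Tanaka estimate for spherical inclusions in an isotropic matrix.
def mori_tanaka_spherical(Km, Gm, Ki, Gi, f):
    """Effective bulk/shear moduli: matrix (Km, Gm), inclusions (Ki, Gi), volume fraction f."""
    K_eff = Km + f * (Ki - Km) / (1.0 + (1.0 - f) * (Ki - Km) / (Km + 4.0 * Gm / 3.0))
    zeta = Gm * (9.0 * Km + 8.0 * Gm) / (6.0 * (Km + 2.0 * Gm))
    G_eff = Gm + f * (Gi - Gm) / (1.0 + (1.0 - f) * (Gi - Gm) / (Gm + zeta))
    return K_eff, G_eff

# Placeholder numbers (GPa): soft binder matrix with stiff active-material particles.
K, G = mori_tanaka_spherical(Km=2.0, Gm=0.7, Ki=30.0, Gi=12.0, f=0.6)
print(f"effective K = {K:.2f} GPa, effective G = {G:.2f} GPa")
```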
Department: Mechanical Engineering
Name: Aaron Feinauer
Date Time: Tuesday, August 20th, 2024 - 11:00 a.m.
Advisor: Dr. Andrew Benard
Extreme-temperature heat exchangers capable of operating between 800°C and 1100°C and at pressures greater than 80 bar are considered a critical component for ultra-high efficiency power generation and a range of next-generation industrial processes. A promising application for this research thrust is the use of carbon dioxide as a working fluid, whose critical point is at 73.8 bar and 31°C. As compared to traditional steam or air-based power cycles, a supercritical carbon dioxide (sCO2) cycle has less compression work near the critical point and higher cycle efficiencies, which enables a smaller plant footprint. The extreme temperatures and pressures required for heat exchange, however, pose a significant materials and system design challenge. This research seeks to develop an efficient and cost-effective test facility to enable the rapid testing and verification of heat exchangers within this temperature and pressure range while utilizing nitrogen as a surrogate fluid for carbon dioxide. A bench-scale test facility was first developed for moderate temperatures and pressures (100°C, 100 psi) for the purpose of developing friction factor and Nusselt number correlations for twisted S-shaped fins and for validating computational fluid dynamics (CFD) models of various fin configurations. A polyimide thermofoil heater was compressed between a mirrored system of additively manufactured heat exchanger plates fitted into a set of aluminum headers. A set of flat aluminum plates was used to compare against the twisted S-shaped finned plates made from titanium. Compared to other results within the literature, the correlations developed here for flat plates and finned surfaces are enhanced by the inlet impingement and outlet transition effects. The friction factor is up to 20.1 times larger for the flat plate correlations, while the twisted S-shaped fins are up to 7.2 times greater than the literature would suggest. For the Nusselt number correlations, the flat plate correlation is 6 times larger, while the twisted S-shaped fins are up to 2.5 times larger than the literature would suggest. As compared to the experimental results, the CFD errors for friction factor are within -21.63% for the flat plate and -16.74% for the twisted S-fins. The maximum error in the Nusselt number for the flat plate is within +20.87%, while the twisted S-shaped fins have a maximum error on the order of -54.14%. The differences here between experiment and CFD are attributable to contact resistance effects between the heater and plate surfaces and the roughness of the printed fins. A 5 kW test facility was developed for heat exchanger characterization capable of operating at 250 bar, 300°C on the cold side and 80 bar, 1100°C on the hot side. The primary research within this work related to this facility is the development of process heat at high flow rates with a high inlet temperature, the management of the high-temperature throttling process between the cold side and the hot side, and the optimization of the headers for integration with the heat exchanger. The development of process heat was achieved by a U-shaped graphite heating element with internal hexagonal channels that allow for prediction of heat transfer properties. Self-cooled nickel 200 alloy conductors are used, which allow for the extreme inlet temperatures expected in sCO2 recuperative flows.
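For orientation, the sketch below shows how a Darcy friction factor and Nusselt number are typically reduced from plate-channel measurements (pressure drop, flow rate, heat input, and a log-mean temperature difference). The property values and measurement numbers are placeholders; the correlations in this work come from the actual rig data.

```python
# Hedged data-reduction sketch for one assumed operating point (placeholder numbers).
rho, mu, k_f = 1.1, 1.9e-5, 0.028               # fluid density, viscosity, conductivity (assumed)
Dh, L_ch, A_cs, A_ht = 2.0e-3, 0.15, 4.0e-5, 6.0e-3   # hydraulic dia. [m], length [m], areas [m^2]
mdot, dP, q, dT_lm = 1.2e-3, 850.0, 45.0, 18.0  # flow [kg/s], pressure drop [Pa], heat [W], LMTD [K]

V = mdot / (rho * A_cs)                         # mean channel velocity
Re = rho * V * Dh / mu
f_darcy = 2.0 * dP * Dh / (L_ch * rho * V**2)   # from dP = f*(L/Dh)*(rho*V^2/2)
h = q / (A_ht * dT_lm)                          # convective coefficient from LMTD
Nu = h * Dh / k_f
print(f"Re = {Re:.0f}, f = {f_darcy:.4f}, Nu = {Nu:.1f}")
```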
The inlet conditions to the heater were as high as 450°C due to losses while the outlet flow was generally limited to less than 1100°C for the duration of the experiments at 80 bar. A thick sharp-edged orifice plate was used for high temperature compressible flow control at 7 g/s of N2 from 250 bar to 80 bar. A subset of research here attempted to develop the compressibility factors required for determining the flow rate and pressure drop relationship within a range of orifice diameters from 0.50 mm to 0.70 mm. Finally, a set of headers were developed with internal cooling channels and temperature monitoring to accommodate the extreme temperature and pressure conditions seen within the heat exchanger. A careful energy balance was performed to determine the best approach for optimizing the design and mitigating heat losses for more accurate heat exchanger characterization in future iterations of the design.
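The throttling step described above can be framed with the ideal-gas choked-flow estimate: since 80/250 = 0.32 is below the critical pressure ratio (about 0.53 for N2), the orifice runs choked. The sketch below evaluates that estimate; the discharge coefficient, upstream temperature, and diameter are assumptions, and real-gas compressibility at 250 bar, a focus of this work, will shift the result.

```python
# Ideal-gas choked-flow estimate for a sharp-edged orifice (illustrative assumptions).
import math

gamma, R = 1.4, 296.8          # N2 specific-heat ratio, gas constant [J/(kg*K)]
P0, T0 = 250e5, 573.0          # upstream stagnation pressure [Pa] and temperature [K] (assumed)
d, Cd = 0.60e-3, 0.62          # orifice diameter [m], discharge coefficient (assumed)

A = math.pi * d**2 / 4.0
crit_ratio = (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))
mdot = Cd * A * P0 * math.sqrt(gamma / (R * T0)) * (2.0 / (gamma + 1.0)) ** (
    (gamma + 1.0) / (2.0 * (gamma - 1.0)))
print(f"critical pressure ratio = {crit_ratio:.3f}, choked mass flow ~ {mdot*1e3:.1f} g/s")
```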
Department: Mechanical Engineering
Name: Muhammad Rubayat Bin Shahadat
Date Time: Monday, August 12th, 2024 - 3:00 p.m.
Advisor: Dr. Farhad Jaberi
Direct Numerical Simulations (DNS) of a spatially developing supersonic turbulent shear layer are conducted for a range of convective Mach numbers (Mc), velocity parameters (λ), and density Atwood numbers (A) to examine the effects of compressibility, advection and multi-fluid global density variation on the growth rate, self-similarity, flow statistics, asymmetry, and entrainment of the layer. At distant downstream locations, self-similarity is attained for all the examined cases. The self-similar region is identified by the collapse of normalized mean streamwise velocity, the constant peak of normalized Reynolds stresses, and the linear growth rate of the shear layer thickness as well as momentum thickness. Despite significant variations in the lower-order and higher-order statistics across different convective Mach numbers, velocity parameters, and density Atwood numbers, the profiles collapse within the self-similar region using our proposed self-similar scaling. It is demonstrated that the observed numerical trends and profiles are consistent with the literature and can be explained via compressible self-similar equations and models.
The self-similar forms of continuity, streamwise momentum, transverse momentum, and energy equations have been formulated, incorporating both compressibility and centerline shifts. The self-similar normalized density distribution inside the layer is used to explain the effects of compressibility on various flow statistics including the far-field cross-stream velocity. The density variation is linked to dissipation effects as revealed by our analysis of the self-similar energy equation. An approximate equation for the cross-stream velocity is developed and the profiles of cross-stream velocity obtained from this equation are compared with the DNS results. A geometric interpretation of the entrainment ratio is presented and the approximate equation for the cross-stream velocity is used to provide the general expression of the entrainment ratio. The entrainment ratio increases with convective Mach numbers and velocity parameters, favoring excess entrainment on the high-speed side. Introducing global density variation in the multi-fluid flow enhances the layer asymmetry as compared to the single-fluid shear layer, meaning that the shear layer centerline and the peak of Reynolds stresses shift more towards the lower momentum side. Apart from enhanced asymmetry, the increase in global density variation causes a greater reduction in the shear layer growth rate. A comparative study of the effects of compressibility and global density change on flow variables like mean density or cross-stream velocity reveals some of the interesting features of the simulated compressible multi-fluid shear layer. Despite significant differences in the lower and higher order statistics at different density Atwood numbers, the mean flow profiles collapse within the self-similar zone using our suggested self-similar scaling. The geometric interpretation of the entrainment ratio also helps to explain the decrease in the entrainment ratio with increasing Atwood number.
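A minimal sketch of the post-processing implied above is shown below: computing a momentum thickness from mean profiles and collapsing them in self-similar coordinates. One common compressible definition is used; the dissertation's exact scaling may differ, and the profiles here are synthetic.

```python
# Illustrative self-similar collapse on synthetic mean profiles (not DNS data).
import numpy as np

y = np.linspace(-0.1, 0.1, 401)                 # cross-stream coordinate [m]
U1, U2 = 600.0, 200.0                           # high-/low-speed free streams [m/s] (assumed)
ubar = 0.5 * (U1 + U2) + 0.5 * (U1 - U2) * np.tanh(y / 0.01)   # mean velocity, synthetic
rhobar = 1.0 + 0.2 * np.tanh(y / 0.01)          # mean density, synthetic
rho0, dU = 1.0, U1 - U2

# Momentum thickness and self-similar coordinate eta = (y - y_c) / theta
theta = np.trapz(rhobar * (U1 - ubar) * (ubar - U2), y) / (rho0 * dU**2)
y_c = y[np.argmin(np.abs(ubar - 0.5 * (U1 + U2)))]   # shear-layer centerline
eta = (y - y_c) / theta
u_star = (ubar - U2) / dU                             # normalized mean velocity
print(f"momentum thickness = {theta*1e3:.2f} mm; collapse examined as u* vs eta")
```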
Department: Mechanical Engineering
Name: Anirudh Suresh
Date Time: Wednesday, July 24th, 2024 - 2:00 p.m.
Advisor: Dr. Kalyanmoy Deb
The typical aim of a multi-objective evolutionary algorithm (MOEA) is to identify a well-converged and uniformly distributed set of Pareto optimal (PO) solutions. This step is followed by a multi-criterion decision-making (MCDM) step where the decision-maker (DM) must select a desired solution for further consideration. We propose methods for the convenient execution of the above two steps. We present and compare several unique identifiers for PO solutions with respect to their properties and advantages and disadvantages in optimization, visualization, and decision-making. We propose methods to achieve a superior distribution of solutions in these spaces and demonstrate that a combination of these identifiers can be used during optimization. A well-represented set of PO solutions cannot be guaranteed at the end of optimization, and an incomplete PO front can be problematic for decision-making. We propose a machine learning assisted MCDM framework that can alleviate some of these issues. We also propose integrating these MCDM concepts into optimization to induce confidence in the achieved PO solutions.
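As a minimal illustration of the basic building block behind the MOEA/MCDM workflow above, the sketch below extracts the non-dominated (Pareto) set from a population for a minimization problem. The objective values are random placeholders.

```python
# Minimal non-dominated filter for a minimization problem (illustrative only).
import numpy as np

def non_dominated(F):
    """Return a boolean mask of non-dominated rows of objective matrix F (minimization)."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if not mask[i]:
            continue
        # i is dominated if some j is <= in every objective and < in at least one
        dominated_by = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated_by.any():
            mask[i] = False
    return mask

F = np.random.default_rng(1).random((200, 2))   # 200 candidate solutions, 2 objectives
pareto = F[non_dominated(F)]
print(f"{len(pareto)} non-dominated solutions out of {len(F)}")
```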
Department: Mechanical Engineering
Name: Sai Guruprasad Jakkala
Date Time: Friday, June 28th, 2024 - 1:00 p.m.
Advisor: Dr. Andre Benard and Dr. S Vengadesan
A majority of the equipment used in industry operates in the turbulent flow regime. The design of this equipment requires many iterations, often performed using computer simulations. Turbulence modelling is computationally expensive and time-consuming. In this study, we investigate different turbulence models and their application to the design of cyclone separators and novel plate heat exchangers. The performance of the various models is studied, and the simulations are used to provide insight and guidance on the redesign of these two important systems. Hydrocyclones and heat exchangers are ubiquitous in industry.
A good understanding of the flow features in cyclone separators is paramount to efficiently use them. The turbulent fluid flow characteristics are modeled using URANS, Large Eddy Simulations (LES), and hybrid LES/Reynolds averaged Navier–Stokes (RANS) turbulent models. The hybrid LES/RANS approaches, namely, detached eddy simulation (DES), delayed detached eddy simulation (DDES), and improved delayed detached eddy simulation (IDDES) based on the k-omega SST RANS approaches are explored. The study is carried out for three different inlet velocities. The results from hybrid LES/RANS models are shown to be in good agreement with the experimental data available in the literature. Reduction in computational time and mesh size are the two main benefits of using hybrid LES/RANS models over the traditional LES methods. The Reynolds stresses are observed to understand the redistribution of turbulent energy in the flow field. The velocity profiles and vorticity quantities are explored to obtain a better understanding of the behavior of fluid flow in cyclone separators. The better prediction of turbulent quantities from the hybrid models can help in better modeling the multiphase interactions. Using the improved turbulent quantity predictions, we are able to design a cyclone separator for reduced erosion.
Supercritical CO2 cycles operating with high efficiency require new heat exchangers which can operate at high temperature (above 800°C) and high pressure (above 80 bar) with tens of thousands of hours of operation. In this thesis, we discuss modified metallic plate heat exchangers which can withstand high temperature and high pressure with new twisted S-shaped fins. Novel 3D twisted S-shaped fins are developed for better heat exchanger performance. The fins have a twist to induce a swirl in the flow resulting in enhanced heat transfer. Ni-based superalloy Haynes 214 is the material used for the heat exchanger plates and fins. The heat exchanger is manufactured using additive manufacturing processes. Turbulent Conjugate Heat Transfer simulations are carried out to obtain the temperature and pressure profiles in the heat exchanger in the turbulent regime. A parametric study is conducted to determine the performance of the newly developed 3D twisted S-shaped fins. The CFD results are compared with experiments.
The studies in this thesis resulted in an improved cyclone separator design with a longer operating life due to reduced erosion (a maximum reduction of 90%) without much compromise on efficiency. The 3D twisted S-shaped fins provide a better performance efficiency coefficient (PEC) than S-shaped fins, yielding 10%-13% better performance, and there is a considerable reduction (up to 75%) in the pumping requirement for the 3D twisted S-shaped fins.
Department: Mechanical Engineering
Name: Michael Hayes
Date Time: Monday, April 29th, 2024 - 2:00 p.m.
Advisor: Dr. André Benard
The intermittency of renewable energy sources necessitates storage technologies that can help to provide consistent output on-demand. A promising area of research is thermochemical energy storage (TCES), which utilizes high-temperature chemical reactions to absorb and release heat. While promising, TCES technologies often rely on storing chemically charged materials at high temperatures, complicating handling and posing serious challenges to long-duration storage. A pioneering approach known as SoFuel (solid state solar thermochemical fuel) proposed using counterflowing solid and gas streams in a particle-based moving-bed reactor to achieve heat recuperation and allow flows to enter and exit the reactor at ambient temperatures. Previous work has successfully demonstrated operation of a reduction (charging) reactor based on this concept; this dissertation describes the development of a companion oxidation (discharging) reactor.
The countercurrent, tubular, moving bed oxidation setup permits solids to enter and exit at ambient temperatures, but the system also features a separate extraction port in the middle of the reactor for producing high-temperature process gas. A bench-scale experimental apparatus was fabricated for use with 5 mm particles composed of a 1:1 molar ratio of MgO to MnO, a redox material that exhibits high oxidation temperatures (around 1000°C) and excellent cyclic stability. The experimental reactor system successfully demonstrated self-sustaining thermochemical oxidation at temperatures exceeding 1000°C. Many trials achieved largely steady operation, showcasing excellent operational stability during hours-long experiments. With the aid of user-manipulated inputs, the reactor produced extraction temperatures in excess of 950°C and demonstrated efficiencies as high as 41.3%. An extensive experimental campaign revealed thermal runaway in the upper reaches of the particle bed as a risk to safe, stable reactor operation.
To better understand reactor dynamics and evaluate potential control schemes, a three-phase, one-dimensional finite-volume computational model was developed. The model successfully emulated behavior from the on-reactor experiments and further illustrated the impacts of the three system inputs - solid flow rate, gas extraction flow rate, and gas recuperation flow rate - on overall behavior. A five-zone adaptive model predictive controller (MPC) was developed using a linearized control-volume model as its basis. The controller sought to regulate the size, temperature, and position of the chemically reacting region of the particle bed through several novel approaches. These approaches were tuned and refined iteratively using the 1D computational model, after which they were successfully deployed on the experimental setup. Future work concerns scaling up the oxidation system for larger rates of energy extraction, further analysis of optimal reactor startup procedures, and alternative controller formulations.
Department: Mechanical Engineering
Name: Anshul Tomar
Date Time: Friday, April 26th, 2024 - 11:00 a.m.
Advisor: Dr. Ranjan Mukherjee
Bernoulli pads can create a significant normal force on an object without contact, which is why they have traditionally been used for non-contact pick-and-place operations in industry. In addition to the normal force, the pad produces shear forces, which can be utilized in cleaning a workpiece without contact. The motivation for the present work is to understand the flow physics of Bernoulli pads such that they can be employed for non-contact biofouling mitigation of ship hulls. Numerical investigations have shown that the shear stress distribution generated by the action of the Bernoulli pad on the workpiece is concentrated and results in maximum shear stress very close to the neck of the pad. The maximum value of wall shear stress is an important metric for determining the cleaning efficacy of the Bernoulli pad. We use numerical simulations over a range of parameter space to develop a relationship between the inlet fluid power and the maximum shear stress obtained on the workpiece. To increase the shear force distribution, we explore the possibility of adding mechanical power to the system in addition to the fluid power. The flow field between the Bernoulli pad and the workpiece involves a transition from laminar to turbulent flow and a recirculation region. The maximum shear stress occurs in the vicinity of the recirculation region, and to gain confidence in the numerical solver's ability to estimate these stresses accurately, experiments were conducted with a hot-film sensor.
A direct relationship was obtained between the maximum shear stress on the workpiece and inlet fluid power using dimensional analysis. A relationship between the maximum shear stress and the inlet Reynolds number is also obtained, and implications of these scaling relationships are studied. A direct relationship between the inlet fluid power and the shear losses motivates us to explore other methods of providing power to the system with the objective of increasing shear forces and thereby improving cleaning efficacy. We numerically investigate a Bernoulli pad in which additional mechanical power is added by rotating the pad. This additional power increases both the normal and shear forces on the workpiece for the same inlet fluid power. In the context of the rotating Bernoulli pad, it was found that for a given normal attractive force, a stable equilibrium configuration can exist for two different mass flow rates, with the higher mass flow rate resulting in a higher stiffness of the flow field. This phenomenon has not been reported in the literature. The shear stress distribution, obtained using numerical simulations, is validated using experiments for the first time. A constant temperature anemometer is used with a hot-film sensor and water as the working fluid; the sensor is calibrated using a fully developed channel flow. An experimental setup is designed to calibrate and later measure the wall shear stress in a Bernoulli pad assembly. The maximum wall shear stress is observed very close to the neck of the pad due to flow constriction and separation; the hot-film experiments accurately capture the magnitude of the maximum shear stress and its location. This provides us with confidence in the numerical solver, which can be used to optimize the Bernoulli pad design to improve its cleaning efficacy.
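The kind of scaling relationship described above is often expressed as a power law between maximum wall shear stress and inlet Reynolds number and fitted on log-log axes. The sketch below shows that fitting step on synthetic data; the data points and resulting exponent are placeholders, not results of this work.

```python
# Illustrative power-law fit tau_max ~ a * Re^b on synthetic data (not measured values).
import numpy as np

Re = np.array([2.0e4, 4.0e4, 8.0e4, 1.6e5, 3.2e5])   # inlet Reynolds numbers (assumed)
tau_max = 0.004 * Re**0.85 * (
    1 + 0.02 * np.random.default_rng(2).standard_normal(Re.size))   # [Pa], synthetic

b, log_a = np.polyfit(np.log(Re), np.log(tau_max), 1)
print(f"fitted exponent b = {b:.2f}, prefactor a = {np.exp(log_a):.4f}")
```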
Department: Mechanical Engineering
Name: Saima Alam
Date Time: Monday, April 23rd, 2024 - 10:00 a.m.
Advisor: Dr. Norbert Mueller
Air-conditioning systems consume a significant portion of the energy in an automotive system; hence, any improvement in the performance or efficiency of automotive air-conditioning systems contributes to the energy efficiency and design economy of the vehicle. There has been massive research interest in improving the design of individual components of HVAC systems for efficiency, and many of these improvements have already been implemented. However, due to the non-linear and dynamic nature of automotive air-conditioning and cooling systems, there is still room for improving the efficiency of the integrated unit by improving the control strategy for such systems instead of focusing on individual components alone.
With the advancement of machine learning and programming capabilities, there are now various novel control strategies and algorithms for non-linear systems in general. To apply these algorithms, black-box models of the specific air-conditioning system are built from elaborate experimental data. Despite generating optimized control parameters, these methods provide little insight into the inner dynamics of the system and how they impact system behavior. For this reason, a robust physics-based dynamic model of automotive air-conditioning systems is required to formulate improved control strategies.
The goal of this research is to develop a transient model of the automotive heat pump system for cabin space conditioning, including the non-static time-delay features of the thermal expansion valve used as the expansion device. A modular trans-critical vapor compression system built at MSU and sponsored by Ford was developed to run with sub-critical refrigerants for experimental validation of the model and system identification tests. From the understanding of the thermal expansion valve dynamics, a method was developed to control an electronic expansion valve to perform exactly like, or better than, the specimen thermal expansion valve in the system. The heat pump cycle simulation model results matched the experimental results within an acceptable error margin, and the system coefficient of performance with the developed control strategy for the electronic expansion valve was found to be equivalent to that of the cycle with the specimen thermostatic expansion valve. This work will enable easy conversion from TXV to EXV systems by recommending hardware features and control parameters for a similar performance level in automotive systems. Furthermore, generalized transfer functions of the components were developed for the analysis and recommendation of improved control strategies in automotive air-conditioning systems using thermal and electronic expansion valves.
Department: Mechanical Engineering
Name: Bryce Thelen
Date Time: Monday, April 15th, 2024 - 10:00 a.m.
Advisor: Dr. Elisa Toulson
Research into technologies aimed at improving the efficiency of the internal combustion engine has been motivated over the past several years by increasingly stringent fuel economy and emissions standards in the United States automotive market. Lean-burn operation of spark-ignited (SI) internal combustion engines has the potential to help meet the high fuel economy goals of the coming decade by improving the efficiency of SI engines at partial loads. Although gains in efficiency are found for engines operating with diluted mixtures, these mixtures present difficulties that manifest themselves through the slow flame speeds and poor ignitability associated with lean or diluted air-fuel mixtures. Two types of ignition systems that attempt to mitigate these negative effects are examined here: a radio-frequency plasma-enhanced ignition system and a prechamber-initiated ignition system called Turbulent Jet Ignition.
First, the effects of a plasma-enhanced ignition system on the performance of a small, single-cylinder, four-stroke gasoline engine are examined. Dynamometer testing of the 33.5 cm3 engine at various operating speeds was performed with both the engine’s stock coil ignition system and a radio frequency plasma ignition system. The radio frequency system is designed to provide a quasi-non-equilibrium plasma discharge and features a high-voltage pulser that provides 400 mJ of energy for each discharge and voltages of up to 30 kV. Tests show improvement of the engine’s combustion stability at all operating conditions and the extension of the engine’s lean flammability limit with the radio frequency system. Particular attention is given to the improvements that the radio frequency system provides while burning lean air-fuel mixtures. Additionally, gas analysis of the 33.5 cm3 engine’s exhaust and high-speed images of the radio frequency system taken in a separate 0.4 liter optical engine are also presented.
Second, fully three-dimensional computational fluid dynamic simulations with detailed chemistry of a single-orifice turbulent jet ignition device installed in a rapid compression machine are presented. The simulations were performed using the computational fluid dynamics software CONVERGE and its RANS turbulence models. Simulations of propane fueled combustion are compared to data collected in the optically accessible rapid compression machine that the model’s geometry is based on to establish the validity and limitations of the simulations and to compare the behavior of the different air-fuel ratios that are used in the simulations. In addition to being compared to a companion experimental study, investigations into the effect of TJI orifice size and prechamber spark location are performed. The data generated in the simulations is analyzed and insights into the processes that make up the operation of the TJI are given. Finally, CFD analysis tools are applied to the early development and design of a TJI system intended for a heavy-duty diesel engine being converted to run on natural gas.
Department: Mechanical Engineering
Name: Philipp Schimmels
Date Time: Friday, April 5th, 2024 - 1:00 p.m.
Advisor: Dr. Andre Benard
Large-scale storage of renewable energy is necessary to increase the reliability of this intermittently but abundantly available resource. Of special concern is the storage of energy and its subsequent use in industrial processes requiring high-temperature heat. A promising emerging technology is based on using redox reactions of metal oxides at high temperatures. The shelf-stable redox material MgMnO was identified as a potential candidate due to its high energy density, cyclic stability, high reaction temperature, and good scalability. This work describes the conception, design, manufacturing, testing, and improvement of a solid fuel reduction reactor used to charge the energy storage material MgMnO. The reactor enables continuous charging of the pelletized material via a packed bed moving through a 1500°C furnace. A counter-currently flowing sweep gas is used to separate the released oxygen from the charged material to prevent re-oxidation. It also acts as a heat recuperation carrier that cools charged particles and pre-heats particles before they enter the reaction zone. This approach enables high thermal efficiency as the sensible heat is almost entirely recovered. A lab-scale reactor was built and tested successfully. Challenges such as particle flowability at high temperatures, fluidization of the bed, and low extent of reaction were encountered and solved by managing the counter-flowing gas and increasing the residence time of the particles in the reactor. The reactor output reached a maximum of 2500 W of charged chemical potential. Several models were developed and used to design experiments and validate the performance of the system. The high energetic cost of separating oxygen from the nitrogen sweep gas was identified as a roadblock to improved efficiencies and potential scale-up of the system. This led to mathematical and experimental investigation of using water vapor as an alternative sweep gas. Results show that water vapor is superior to nitrogen as a reducing agent and has a lower energetic cost of production. The proposed reactor can be scaled up, and the results of this study indicate that the pelletized MgMnO material offers thermochemical energy storage at low cost. The extraction of this energy at high temperature offers a path toward the decarbonization of a variety of industrial processes that currently rely on the combustion of hydrocarbon fuels for high-grade heat.
Department: Mechanical Engineering
Name: Ru Tao
Date Time: Monday, April 1st, 2024 - 2:30 p.m.
Advisor: Dr. Michele Grimm
Vaginal childbirth, also known as delivery or labor, is the final phase of pregnancy, in which one or more fetuses pass through the birth canal from the uterus; it is fundamentally a biomechanical process. However, this risky process can cause significant injuries to both the fetus and the mother, such as brachial plexus injury, pelvic floor disorders, or even death. For technical and ethical reasons, experiments are difficult to conduct on laboring women and their fetuses. Computer modeling has therefore become a promising and rapidly growing way to improve our knowledge of the biomechanical processes of labor and delivery. The simulation models developed in this field have focused on either uterine active contraction or the pelvic floor muscles individually. In addition, many limitations exist in current uterus models.
The goal of the project is to develop an integrated model system including the uterus, the fetus, the pelvic bones, and the pelvic floor muscles, which will allow advanced simulation and investigation within the field of biomechanics of fetal delivery. For the first step, a computational model in LS-DYNA simulating the active contraction behaviors of muscle tissue was developed, where the muscle tissue was composed of active contractile fibers using the Hill material model and the passive portion using elastic and hyperelastic material models. The model was further validated with experimental results, which demonstrated the accuracy and reliability of the modeling methodology to describe a muscle’s active contraction and relaxation behaviors. Second, a simulation model of a whole uterus during the second stage of labor was developed, which included active contractile fibers and a passive muscle tissue wall. The effects of the fiber distribution on uterine contraction behaviors were investigated and the delivery of a fetus moving through the uterus due to the contraction was simulated. The developed uterus model included several important uterine mechanical properties, such as the propagation of the contraction wave, the anisotropy of the fiber distribution, contraction intensity variation within the uterus, and the pushing effect on the fetus. Finally, an integrated model system of labor was established by incorporating the pelvic structures with the uterus and fetus models. The model system successfully delivered the fetus from the uterus and through the birth canal. The simulation results were validated based on available data and clinically observed phenomena, such as the stress distribution within the uterus, the values of von Mises stress and principal stress in the pelvic floor muscles, and the rotation and movement of the fetus. Overall, a Finite Element Method model system simulating the labor process was developed in LS-DYNA, which will be used to investigate disorders related to labor, such as neonatal brachial plexus injury and maternal pelvic floor muscle injuries.
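For readers unfamiliar with Hill-type contraction models, the sketch below evaluates a generic active fiber stress of the form activation x force-length x force-velocity x peak stress, plus a passive exponential term. The functional forms and constants are generic illustrations, not the LS-DYNA material card used in this dissertation.

```python
# Generic Hill-type fiber stress sketch (illustrative constants and curve shapes).
import numpy as np

sigma_max = 200e3     # peak isometric fiber stress [Pa] (assumed)
lam_opt = 1.1         # stretch at optimal fiber length (assumed)

def active_stress(lam, lam_rate, a):
    f_l = np.exp(-((lam - lam_opt) / 0.25) ** 2)        # Gaussian force-length curve
    f_v = np.clip(1.0 - lam_rate / 5.0, 0.0, 1.4)       # crude force-velocity factor
    return a * sigma_max * f_l * f_v

def passive_stress(lam, c1=5e3, c2=6.0):
    return c1 * (np.exp(c2 * (lam - 1.0)) - 1.0) if lam > 1.0 else 0.0

lam, lam_rate, a = 1.05, 0.2, 0.8   # current stretch, stretch rate [1/s], activation level
total = active_stress(lam, lam_rate, a) + passive_stress(lam)
print(f"fiber stress = {total/1e3:.1f} kPa")
```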
Department: Mechanical Engineering
Name: Eli Broemer
Date Time: Monday, April 1st, 2024 - 11:30 a.m.
Advisor: Dr. Sara Roccabianca
Bladder health and dysfunction are not well understood. Research with mouse models is an effective way to study soft tissue/organ function, especially with the genetic tools available in this species. Despite this advantage, bladder research in mice still lags behind that in other animal models. In particular, mechanical testing/analysis of mouse bladder tissue is nearly nonexistent in the literature. In this dissertation, experimental ex vivo pressurization of whole mouse bladders was used to analyze the mechanical stresses and stretches in the soft tissue. Bladder filling cycles were digitally reconstructed in 4D. The reconstructions were used to characterize the geometry and mechanics of the bladder as it fills. This work contributes to the bladder mechanics literature, as this level of 4D and mechanical analysis of bladder filling in a mouse model has not been shown before.
Department: Mechanical Engineering
Name: Jonathon Winslow Howard
Date Time: Thursday, March 21st, 2024 - 12:00 p.m.
Advisor: Dr. Abraham Engeda
Operation of helium cryogenic systems below the normal boiling point of helium (approximately 4.2 K) has become a common need for modern high-energy particle accelerators. Nominal cooling near 2 K (or a corresponding saturation pressure of approximately 30 mbar) is often required by superconducting radio-frequency niobium resonators (also known as SRF cavities) to achieve the performance targets of the particle accelerator. To establish this cooling temperature, the cryogenic vessel (or cryostat) containing the SRF cavities is operated at the sub-atmospheric saturation pressure by continuously evacuating the vapor from the liquid helium bath. Multi-stage cryogenic centrifugal compressors (‘cold-compressors’) have been proven to be an efficient, reliable, and cost-effective method to achieve sub-atmospheric cryogenic operating conditions for large-scale systems. These compressors re-pressurize the sub-atmospheric cryogenic helium to just above atmospheric conditions before injecting the flow back into the main helium refrigerator. Although multi-stage cryogenic centrifugal compressor technology has been implemented in large-scale cryogenic systems since the 1980s, theoretical understanding of their operation (steady-state and transient) is inadequate to provide a general characterization of the compressor and establish stable wide-range performance. The focus of this dissertation is two-fold regarding multi-stage centrifugal compressors as used for sub-atmospheric helium cryogenic systems. First, to develop a reliable performance prediction model for a multi-stage cryogenic centrifugal compressor train, validated with measurements from an actual operating system. Capabilities of the model include steady-state performance estimation and prediction of operational envelopes that ensure stable and wide-range steady-state operation. Second, to develop and validate a process model of the entire sub-atmospheric system (e.g. FRIB) and establish a simple methodology to obtain a reliable thermodynamic path for the transient (‘pump-down’) process of reducing the helium bath pressure from above 1 bar to the operational steady-state conditions near 30 mbar. The effectiveness of the developed methodology is demonstrated by comparing the estimated and measured process parameters from the sub-atmospheric system studied (i.e. FRIB). The developed model and methodology are intended to benefit the design and operation (both steady-state and transient) of multi-stage cryogenic centrifugal compressor trains used in large-scale cryogenic helium refrigeration systems.
Department: Mechanical Engineering
Name: Md Sarower Hossain Tareq
Date Time: Tuesday, January 16th, 2024 - 11:00 a.m.
Advisor: Dr. Patrick Kwon and Dr. Haseung Chung
Nitinol is highly attractive for biomedical applications because of its unique shape memory and superelastic properties as well as acceptable biocompatibility. Additive manufacturing (AM) is attracting significant attention for making complex and patient-customizable nitinol devices. However, due to its high microstructural and compositional sensitivities, it is still challenging to fabricate functional NiTi devices via AM. It has been widely reported that evaporation of Ni, oxidation of Ti, and formation of precipitation phases during fabrication significantly alter the expected functional properties. To date, laser powder bed fusion (LPBF) has been the technique of choice among many AM techniques for fabricating NiTi devices, but NiTi has been successfully fabricated only on a NiTi substrate because of its poor bonding to other substrates (i.e., steel and Ti). In this work, a multi-step printing approach was systematically developed, which enabled printing NiTi on a Ti substrate using a very low laser energy density of 35 J/mm3 without any visible defect. This printing method reduced the severe warping caused by process-induced residual stress and avoided the Ni evaporation issue as well as the formation of undesirable precipitation phases during printing. It was also found that a higher oxygen level in the printing chamber reduced the austenite finish (Af) temperature and negatively affected the printability. These results showed the feasibility of LPBF in printing NiTi on a substrate other than nitinol, providing a possible route to reduce the cost of NiTi fabrication via AM.
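For context, the volumetric laser energy density quoted above is conventionally computed as E = P / (v * h * t). The sketch below shows the arithmetic with hypothetical process parameters chosen only to land near 35 J/mm3; they are not the study's actual settings.

```python
# Volumetric energy density E = P / (v * h * t); all parameter values are hypothetical.
P = 50.0        # laser power [W] (assumed)
v = 600.0       # scan speed [mm/s] (assumed)
h = 0.08        # hatch spacing [mm] (assumed)
t = 0.03        # layer thickness [mm] (assumed)

E = P / (v * h * t)   # volumetric energy density [J/mm^3]
print(f"E = {E:.1f} J/mm^3")
```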
The as-printed NiTi sample exhibited a distinct one-step phase transformation with an Af temperature of 2.1°C. To increase the Af temperature to 30.2°C (within the recommended range of Af temperatures for biomedical applications), a heat treatment protocol was developed, which includes a solution cycle (at 900°C for 1 hour) followed by an aging cycle (at 450°C for 30 minutes). The heat treatment protocol produced a homogenized microstructure while creating ultrafine metastable Ni-rich precipitates, Ni4Ti3, which facilitated the desirable phase transformation behavior with the increased Af temperature. The heat-treated sample showed a narrower and sharper two-step martensitic phase transformation with the formation of an intermediate R-phase. The presence of both Ni4Ti3 and the R-phase was confirmed by transmission electron microscopy (TEM) analysis. In the superelasticity test at body temperature, these samples, starting from the 2nd cycle, demonstrated a recovery ratio of more than 90% and a recoverable strain of more than 6.5%. After the 10th cycle, the stable recoverable strain was 6.52% with a recovery ratio of 96%, which is, to the best of our knowledge, the highest superelasticity reported for LPBF-processed NiTi. After the initial deformation process, we expect these samples to attain near full superelasticity during service. The micro-hardness study also showed that the hardness of the heat-treated samples is less affected by cyclic loading.
Nitinol stents are attractive since they are self-expandable and behave superelastically when deployed inside the body. In contrast to the multi-step conventional manufacturing route, AM is attractive for making nitinol stents since it provides one-step processing as well as a wide range of customizable designs. However, the individual struts of a stent are less than 150 µm, which makes them very challenging to fabricate by LPBF with structural accuracy, mechanical integrity, and proper superelasticity. In this work, the LPBF processing parameters as well as the post-process surface finish were systematically developed to minimize porosity, avoid structural failure during deformation, and maximize the superelastic properties at body temperature. Finally, the processed thin strut showed an Af temperature of 26°C (which is less than body temperature) and demonstrated 91% strain recovery with 4.1% recoverable strain at body temperature.
The work presents an important roadmap for making NiTi devices by AM while maintaining the excellent functional properties of NiTi for biomedical applications.
Department: Mechanical Engineering
Name: Mahdieh Tanha
Date Time: Wednesday, December 6th, 2023 - 9:00 a.m.
Advisor: Dr. Brian Feeney
This work is motivated by the undulatory swimming motion of fish, where the fish body is idealized as a mechanical beam with external forces due to fluid-structure interaction and internal neuromuscular actuation. To this end, the purpose of this thesis is to investigate why a beam with specific properties and excitation can propel itself in certain fluids while the same beam in a vacuum cannot. In particular, this study investigates whether the fluid-structure interactions in a flow can generate non-synchronicity of the body wave, believed to be important in generating thrust, and evaluates the resulting thrust. The study is conducted on two aspects: (1) an investigation into characteristics that lead to thrust and a stable speed, and (2) a study on the influence of the fluid environment on lateral oscillation characteristics compared to the oscillation of the same beam in a vacuum.
The first phase of the thesis focuses on identifying any relationship between the oscillating beam's slope and the fluid pressure on the beam, as their product equals the thrust distribution along the beam. We focused on the Lighthill force model and the Taylor force model, which are fundamentally different but are well known for this application. We found that non-synchronicity and an appropriate amplitude envelope of the beam's oscillation can lead to thrust production, specifically when the amplitude envelope and its spatial derivative are similar and when there is a single dominant mode, which causes a nearly constant phase difference between pressure and body slope.
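As a point of reference, the sketch below evaluates the classical elongated-body (Lighthill) estimate of cycle-averaged thrust for a prescribed traveling-wave lateral motion, evaluated at the tail. This textbook special case is not the distributed pressure-times-slope formulation studied in the thesis; the body dimensions, wave parameters, and swim speed are assumptions.

```python
# Classical elongated-body thrust estimate for h(x,t) = A sin(kx - wt) at the tail.
import numpy as np

rho, d, U = 1000.0, 0.05, 0.5        # water density, body depth [m], swim speed [m/s] (assumed)
L_body, A_tail = 0.3, 0.02           # body length, tail amplitude [m] (assumed)
k, omega = 2 * np.pi / L_body, 2 * np.pi * 2.0   # wavenumber, angular frequency (2 Hz tail beat)

m_a = 0.25 * np.pi * rho * d**2      # added mass per unit length of the tail section
t = np.linspace(0.0, 2 * np.pi / omega, 2000)
dh_dt = A_tail * (-omega) * np.cos(k * L_body - omega * t)
dh_dx = A_tail * k * np.cos(k * L_body - omega * t)
thrust = 0.5 * m_a * np.mean(dh_dt**2 - U**2 * dh_dx**2)   # cycle-averaged tail formula
print(f"mean thrust ~ {thrust:.3f} N")
```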
In the second phase, we examine whether the fluid's effect is naturally conducive to the production of traveling waves in the body. We looked at the transverse damping force of fins, and the Taylor and Lighthill models of the fluid force on cylindrical immersed bodies, and investigated their effects on the natural modal shapes and frequencies of the beam, evaluating the existence of non-synchronicity and an amplitude envelope constructive to thrust. We found that the resistive nature of the fluid injects significant damping into the oscillation, leading to non-synchronicity of oscillation, a reduced modal frequency, and an amplitude envelope that is a consequence of modal coupling. Application of transverse dampers with suitable damping coefficients and placement on the beam can help to increase these effects. However, the reactive nature of the fluid is not seen to inject much damping into the system and does not strongly affect the modal shapes and frequencies unless the propulsion speed is high enough.
We conclude that the presence of fluid surrounding an oscillating beam changes its lateral oscillation properties and creates a pressure field around the beam in a way that can lead to thrust and propulsion at a stable speed, provided the beam bending stiffness, density, dimensions, length, frequency, wavelength, fluid density, and damping strength are appropriate. An earthbound case of successful propulsion is seen in the swimming of fish as soft beams in a moderately resisting fluid such as water.
Department: Mechanical Engineering
Name: Atacan Yucesoy
Date Time: Monday, November 27, 2023 - 3:00 p.m.
Advisor: Thomas Pence and Ricardo Mejia
The Effects of Mechanical Intrinsic Factors Induced by Morphogenesis on Brain Mechanics
Brain soft tissue is subject to large strains due to the shape changes that occur during growth. The process can be viewed as one in which tissue transforms from a locally stress-free reference configuration to a mature state exhibiting large elastic deformations. Biological growth alters the state of stress and leads to residual fields that exist in the equilibrium state of the tissue after morphogenesis. Residual stress fields are inhomogeneous and anisotropic. This resulting stress field typically involves compressive and tensile stresses that vary through the material in a complex fashion. Hence, the mechanical response of residually stressed tissues to finite deformations differs from that of stress-free tissues. Furthermore, considering the role of the mechanical properties of the tissues in the regulation of the essential behavior of the cellular structure, the residual fields have a potential role in mechanotransduction at the tissue and cellular scale. The residual fields should also be included when seeking to model the micromechanical mechanisms that give rise to brain injury. While the physical mechanisms of acute and secondary injuries caused by extreme events (e.g., blunt impact, blast waves, cavitation) still remain unclear, it does seem clear that residual stresses could have a significant effect. For example, a preexisting tensile residual stress could accelerate the formation of microfissures during an episode of physical trauma, whereas a preexisting compressive residual stress field could provide some benefit in delaying fissure formation. It is issues of this type that motivate much of this research.
Research on residual fields in the cortex is quite limited, despite extensive experimental studies estimating residual stress fields in various tissues. The limited experimental findings generally indicate that gray matter (outer layer) experiences compression, while white matter (inner core) is subject to tension. The findings support the differential growth hypothesis where the gray matter is growing more than the white matter. It should be noted that the experimental insights are limited to specific cutting directions and regions, making a comprehensive assessment of residual stress/strain fields challenging. On the other hand, computational models have been extensively used in order to simulate cortical growth and folding processes to understand various aspects such as folding patterns, developmental abnormalities, underlying growth and folding mechanisms, and the role of physical and material properties on the final morphology. Still, there is a notable scarcity of computational models predicting the mechanical state of brain tissue during cortical growth, particularly including the extended folding regime.
The research work presented in this dissertation concerns the study of morphogenesis-induced residual stress fields in hyperelastic materials and the potential effect of these residual stress fields on the material response. This research is generally based on the non-linear theory of elasticity.
To address the effect of the residual stress field on the material response, the solution of a finite deformation boundary value problem for a residually stressed elastic spherical shell subject to pressure inflation is first provided. To this end, the general constitutive equation for an isotropic Mooney-Rivlin type of hyperelastic material with a background residual stress field is derived. Four residual stress fields with distinct levels of strength are considered. The problem is then expressed as a compact integral expression including the base response of the material and the response arising from the presence of the residual stress field. An asymptotic analysis is conducted to examine the dependence of residual-stress integrals on a dimensionless measure of radial strain. The results are compared with the base response of the Mooney-Rivlin type material to pressure inflation and the potential effect of the residual stress field on the material response is discussed. The numerical analysis shows that the residual stress fields have the potential to alter the qualitative behavior of the pressure-inflation response of the material.
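As a baseline without residual stress, the classical pressure-stretch relation for inflation of a thin incompressible Mooney-Rivlin spherical shell is P = 4 (H/R)(1/lam - 1/lam^7)(C1 + C2 lam^2). The sketch below evaluates it with illustrative constants; the dissertation treats the thick-walled, residually stressed problem via integral expressions.

```python
# Thin-shell Mooney-Rivlin inflation baseline (no residual stress; illustrative constants).
import numpy as np

C1, C2 = 1.0e3, 0.05e3     # Mooney-Rivlin constants [Pa] (assumed soft-tissue range)
H_over_R = 0.05            # thickness-to-radius ratio (assumed)

lam = np.linspace(1.01, 3.0, 200)            # circumferential stretch
P = 4.0 * H_over_R * (lam**-1 - lam**-7) * (C1 + C2 * lam**2)
print(f"peak of P(lam) at lam = {lam[np.argmax(P)]:.2f}  (limit-point instability)")
# A background residual stress field contributes an additional term to this response;
# the analysis above shows it can change the qualitative shape of the inflation curve.
```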
While the analysis just described is quite general, applying to residual stress fields that could arise from a variety of causes, the work then proceeds to examine residual stress due to differential growth in adjoining tissue in incompressible isotropic hyperelastic single- and bilayer spherical shells. The kinematics of differential volumetric growth within the incompressible hyperelastic framework are presented for each geometry considered, and the growth-induced residual stress fields are computed for five growth conditions, including area, surface, isotropic, and combined area and surface growth. The sensitivity of the resultant stress field to the differential growth in adjoining layers is then examined for combinations of these growth conditions. In this analysis, spherical symmetry is preserved during growth. To address the residual stress fields generated by morphogenesis including symmetry-breaking bifurcation and beyond, the study then builds an elementary computational model with idealized geometry, boundary conditions, and parameters. This 2D plane strain computational model provides the residual stress/strain fields emerging in a formation resembling the sulcus-gyrus structure of a gyrified brain. In the finite element model, an initially flat bilayer rectangular domain is utilized, consisting of a relatively stiff outer layer (cortex) and an inner core (subcortex). Following the differential growth hypothesis, the residual stress and strain fields are computed for the case where the cortex undergoes only tangential (in-plane) growth while the subcortex does not grow. A detailed stress and strain analysis of the resultant sulcus-gyrus formation is performed to understand morphogenesis-induced residual fields, specifically at the sulcal floor and gyral crown. The computational results are consistent with previous experimental findings.
Given the specific attention to physical injuries leading to neuropathologies such as chronic traumatic encephalopathy (CTE), which is seen at the depth of the sulcus, the analysis is extended to encompass the response of non-residually stressed sulci subjected to intrasulcal deformations. A 2D plane strain computational model of a single sulcus is built to examine the deformations associated with the expansion of a cavitation bubble in the intrasulcal region. Based on previously obtained experimental data, quasi-static and transient pressure loading conditions are applied to the gray matter-cerebrospinal fluid (CSF) boundary, and the response of the sulcus is investigated in detail. The findings demonstrate that cavitation results in sulcal expansion and the formation of localized high strain and strain rates at the depth of the sulci. The strain and strain rate localization regions resemble the tauopathy/neurofibrillary tangle patterns seen in early CTE.
Department: Mechanical Engineering
Name: Lingyun Hua
Date Time: Wednesday, November 15, 2023 at 1:00 p.m.
Advisor: Guoming Zhu
ABSTRACT ECONOMIC ROUTE-SPEED OPTIMIZATION AND CONTROLS FOR CONNECTED ELECTRIC VEHICLES
This dissertation focuses on reducing vehicle energy consumption using optimal control and real-time optimization based on vehicle connectivity. The proposed methods include optimal vehicle transient motion control and eco (economic) route planning, where the vehicle route and speed are optimized based on a proposed data-driven Grey-Box model that accounts for vehicle speed and the driving environment, such as temperature, road grade, and gust wind. Together, the two methods, optimal transient control and eco route planning, form the complete route and speed optimization system. Vehicle transient motion control plays an important role in reducing energy consumption for hybrid and electric vehicles, as well as vehicles powered by internal combustion engines, since vehicle acceleration and deceleration can be optimized based on the driving environment.
In this thesis, nonlinear quadratic tracking (NQT) control is used for optimal acceleration, and the minimum principle is used for deceleration to optimize energy recovery. The acceleration control generates the optimal propulsion torque based on the current powertrain states and the error between the vehicle speed and a reference provided by the connected system based on the surrounding traffic, while the deceleration (braking) control optimizes the regenerative brake to maximize the recovered energy while obeying speed and braking-distance constraints. Both control strategies are designed for real-time application and can be updated online, using analytic solutions of the optimal control problems, to respond to rapid changes in the traffic environment. Computer-in-the-loop (CIL) and Hardware-in-the-loop (HIL) simulations validate their ability to reduce energy consumption and adapt to a changing traffic environment in real time.
The optimal controls above ignore various system disturbances (e.g., road grade, gust wind) that occur during driving, as well as model uncertainties due to model simplification and parameter errors. To address this, a linear quadratic integral tracking (LQIT) control is utilized to generate regulation laws for both acceleration and deceleration operations and reduce tracking error. The LQIT acceleration control tracks the reference speed trajectory generated by the optimal acceleration strategy with minimal tracking error, and the LQIT deceleration control tracks the brake-distance reference from the optimal braking control, achieves the target speed, and keeps the brake distance below its reference for safety. A unified Kalman filter is used to estimate the system state from noisy measurements. Simulation studies validate the proposed LQIT controls and show that the static tracking errors for both speed and distance are reduced while handling changing traffic environments.
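For orientation, one predict/update cycle of a generic discrete-time linear Kalman filter is sketched below in Python; the state, input, and output matrices of the unified filter used in this work are not specified here, so all matrix names are placeholders.

import numpy as np

def kalman_step(x, P, u, z, A, B, C, Q, R):
    """One predict/update cycle of a discrete-time linear Kalman filter.
    x, P : prior state estimate and covariance
    u, z : control input and noisy measurement at this step
    A, B, C : state, input, and output matrices (placeholders)
    Q, R : process and measurement noise covariances
    """
    # Predict
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Update with the new measurement
    S = C @ P_pred @ C.T + R                  # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

The resulting state estimate would then feed the LQIT regulation laws for acceleration and deceleration.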
To perform the vehicle eco route and speed optimization, a vehicle energy consumption model is necessary to estimate the energy usage. In this thesis, a Grey-Box vehicle energy consumption model is developed based on vehicle dynamics, with environmental influence captured by a Kriging model. The model retains high fidelity by embedding basic vehicle dynamics in the model structure, including rolling resistance, aerodynamics, gravity, and the energy consumption of air conditioning (AC) and heater as functions of environmental conditions such as temperature and wind speed. The proposed data-driven model is trained under a Gaussian process assumption with a modeling error below 2.5%. After the real-time model is trained, a Recursive Least-Squares (RLS) algorithm is used to update the model with new driving data to reflect the current vehicle status, such as aging. The accuracy of the proposed Grey-Box Kriging model is verified in CIL simulation, and a case study on vehicle routing shows the capability of reducing energy consumption by using the Grey-Box model under changing environments.
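A minimal sketch of a recursive least-squares update with a forgetting factor, of the kind that could refresh the Grey-Box model parameters from streaming drive data, is shown below; the regressor contents and parameter names are illustrative, not those of the dissertation.

import numpy as np

def rls_update(theta, P, phi, y, lam=0.995):
    """One recursive least-squares step with forgetting factor lam.
    theta : current parameter estimate, shape (n,)
    P     : current inverse-correlation matrix, shape (n, n)
    phi   : regressor for the new sample, shape (n,), e.g. speed, grade,
            and temperature features of the energy model (illustrative)
    y     : measured energy consumption for the new sample
    """
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + (phi.T @ P @ phi).item())   # gain vector
    err = y - (phi.T @ theta.reshape(-1, 1)).item()  # prediction error
    theta = theta + K.flatten() * err                # parameter correction
    P = (P - K @ phi.T @ P) / lam                    # covariance update
    return theta, P

The forgetting factor lam < 1 discounts old data, which is what lets the model track slow changes such as vehicle aging.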
Based on the developed Grey-Box energy model, a novel vehicle eco motion planning (VEMP) method is proposed to optimize the vehicle route and speed simultaneously, minimizing energy usage for a given origin-destination pair and travel time limit. The proposed VEMP method is based on a modified Dijkstra algorithm and gradient-descent speed optimization to find and update the optimal route and corresponding speed profile in real time as traffic and driving-environment information changes. Co-simulation studies of the developed VEMP method are conducted in MATLAB with the SUMO traffic model using a real-world map. The simulation results across five scenarios show that, for the studied driving environments, the VEMP speed optimization reduces total energy consumption. A sudden traffic jam study demonstrates the real-time updating capability of the proposed VEMP method to handle sudden traffic changes such as vehicle cut-ins.
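For reference, the core of a Dijkstra-style minimum-energy route search over an energy-weighted road graph might look like the following sketch; the edge energies would come from the Grey-Box model, and the travel-time constraint and speed optimization of the actual VEMP method are omitted here, so the structure and names are illustrative.

import heapq

def min_energy_route(graph, source, target):
    """Dijkstra search on a road graph whose edge weights are predicted
    energy costs. graph: {node: [(neighbor, energy_cost), ...]}.
    Returns (total_energy, route) or (float('inf'), []) if unreachable."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == target:
            route = [node]
            while node in prev:
                node = prev[node]
                route.append(node)
            return d, route[::-1]
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return float("inf"), []

Re-running such a search whenever traffic or environment updates arrive is one simple way to picture the real-time route updating described above.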
Department: Mechanical Engineering
Name: Aakash Gupta
Date Time: Tuesday, November 7, 2023 - 10:00 a.m.
Advisor: Wei-Che Tai
THE INERTER PENDULUM VIBRATION ABSORBER: WITH APPLICATIONS IN OCEAN WAVE ENERGY CONVERSION AND HYDRODYNAMIC RESPONSE SUPPRESSION
The annual power incident on the ocean-facing coastlines of North America is over 400 GW. Capturing even a small fraction of this energy can significantly contribute to meeting energy demands; therefore, there is renewed research interest in converting energy from ocean waves. Ocean wave energy capturing devices, known as wave energy converters (WECs), are typically placed in deep water, where the wave energy is higher than in shallow water. To reduce the cost of installing and maintaining WECs in deep water, they can be integrated with existing offshore floating platforms. For such integration, traditional WECs, which operate on the principle of linear resonance, have a natural period in heave close to a typical wave period so as to generate a large heave resonant response and hence high-efficiency wave power production, but this also causes large platform motions. In other words, wave power production and hydrodynamic stability of the platform are conflicting objectives in traditional linear WECs. Therefore, simultaneous wave energy conversion and response suppression of the platform is necessary. To address this issue, this work proposes a device known as an inerter pendulum vibration absorber (IPVA), which combines an inerter with a parametrically excited centrifugal pendulum.
Two system variations are studied: the IPVA and the IPVA-PTO, marking the absence and presence of an electromagnetic power take-off (PTO) system, respectively. The IPVA and the IPVA-PTO are integrated with a single-degree-of-freedom (sdof) structure (a primary mass) and a spar, respectively. The efficacy in suppressing vibrations is studied for the sdof IPVA system, whereas wave energy conversion and response suppression are analyzed for the spar IPVA-PTO. For both systems, a nonlinear energy transfer phenomenon is observed in which energy is transferred between the primary mass (or spar) and the pendulum vibration absorber. For the sdof IPVA system, it is shown that the energy transfer is associated with the 1:2 internal resonance of the pendulum induced by a period-doubling bifurcation. A perturbation analysis shows that a pitchfork bifurcation and a period-doubling bifurcation are necessary and sufficient conditions for this internal resonance to occur. Harmonic balance analysis, in conjunction with Floquet theory and an arc-length continuation scheme, is used to predict the boundary of internal resonance in the parameter space and verify the perturbation analysis. Furthermore, the effects of various system parameters on this boundary are examined. Next, the sdof IPVA is compared with a linear benchmark and an autoparametric vibration absorber and shows more efficacious vibration suppression. For the spar IPVA-PTO system, a similar analysis reveals the nonlinear energy transfer, which is used to convert the vibrations of the spar into electricity while reducing its hydrodynamic response. As in the IPVA, a period-doubling bifurcation results in 1:2 internal resonance, which is necessary and sufficient for nonlinear energy transfer to occur. The hydrodynamic coefficients of the spar are computed using a commercial boundary element method code. The period-doubling bifurcation is studied using the harmonic balance method, and a modified alternating frequency/time (AFT) approach is developed to compute the Jacobian matrix involving the nonlinear inertial effects of the IPVA-PTO system. The response amplitude operator (RAO) in heave and the capture width of the spar IPVA-PTO are compared with those of its linear counterpart, and the spar IPVA-PTO system outperforms the linear energy harvester with a lower RAO and higher capture width. Experiments in which the IPVA and the IPVA-PTO are integrated with an sdof system (or a "dry" spar in the case of the IPVA-PTO) are performed to verify the analysis.
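As background on the alternating frequency/time idea, the generic evaluation of a nonlinear force within harmonic balance is sketched below; this plain version does not include the inertial nonlinearities or the Jacobian computation of the modified AFT developed in this work, and the cubic nonlinearity is purely illustrative.

import numpy as np

def aft_nonlinear_force(X, nonlinearity, n_harmonics, n_time=256):
    """Alternating frequency/time evaluation of a nonlinear force.
    X : one-sided complex Fourier coefficients of the displacement,
        X[k] for harmonics k = 0..n_harmonics over one period.
    nonlinearity : pointwise time-domain function, e.g. lambda x: x**3.
    Returns the Fourier coefficients of nonlinearity(x(t)) up to n_harmonics."""
    # Frequency -> time: reconstruct x(t) over one period
    t = np.linspace(0.0, 2 * np.pi, n_time, endpoint=False)
    x = np.real(sum(X[k] * np.exp(1j * k * t) for k in range(n_harmonics + 1)))
    # Evaluate the nonlinearity sample by sample in the time domain
    f = nonlinearity(x)
    # Time -> frequency: project back onto the retained harmonics
    F = np.array([np.mean(f * np.exp(-1j * k * t)) for k in range(n_harmonics + 1)])
    F[1:] *= 2.0  # one-sided coefficients for k >= 1
    return F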
Next, both the IPVA and the IPVA-PTO systems are integrated with a spar-floater combination and analyzed for their performance. Near the first resonance frequency, the spar-floater IPVA system shows a period-doubling bifurcation and energy transfer similar to the sdof IPVA system and outperforms the linear benchmark for hydrodynamic response suppression. The spar-floater integrated IPVA-PTO system, on the other hand, is analyzed for its performance near both resonance frequencies. It is shown that near the first resonance, the spar-floater IPVA-PTO system's response undergoes a period-doubling bifurcation and, for small electrical damping, shows energy transfer. Near the second resonance, however, a secondary Hopf bifurcation is observed. A rich set of pendulum responses, including primary and secondary harmonics, quasiperiodic, non-periodic, and rotational responses, is observed, and rotation provides the best energy conversion among all the identified responses. Finally, the electrical damping of the system is varied to find the optimal values for which the largest energy conversion occurs, and it is found that the optimal electrical damping for energy transfer is associated with the pendulum's rotation.
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Mechanical Engineering at 355-5131 at least one day prior to the seminar; requests received after this date will be met when possible.
Department: Civil and Environmental Engineering and Mechanical Engineering
Name: Aref Ghaderi
Date Time: Tuesday, August 22, 2023 - 4:00pm
Location: 3540 Engineering Building
ABSTRACT
Advisor: Dr. Roozbeh Darganzany
Cross-linked elastomers play a significant role in several industries, such as aerospace, construction, transportation, marine, aeronautics, and automotive, due to their excellent flexibility, toughness, formability, and versatility. During their intended service life, these materials must withstand aggressive environmental damage induced by water infusion, temperature, and solar ultraviolet (UV) radiation, which affects their durability and properties.
A reliable design of rubber components that prevents early failure by environmental degradation requires digital simulations by means of high-fidelity thermo-mechanical constitutive models that can capture the adverse effects of aging on the mechanical, electrical, thermal, and failure properties of polymers. So far, most aging models have been developed by coupling hyperelastic constitutive models with single-kinetic degradation models to describe the decay of materials during aging. However, a more detailed modeling approach can be achieved through modular continuum-based damage models that integrate finite strain theory with thermo-mechanical degradation models.
Rubber elasticity theory is derived partly from (i) statistical mechanics at the micro-scale, (ii) phenomenological modeling at the meso-scale to model the network, and (iii) continuum mechanics at the macro-scale to model the material. Accordingly, hyperelastic models fall into three main categories: the phenomenological approach, the micro-mechanical approach, and the data-driven approach.
Recently, the emergence of machine-learned (ML) models has attracted much attention. The first generation of "black-box" ML models as another type of phenomenological model was proposed to model the mechanical behavior of rubbery media.
In solid mechanics, stress-strain tensors are only partially observable in lower dimensions, so obtaining data to feed a black-box ML model is exceptionally challenging. These approaches therefore quickly become impractical due to the high demand for training data and the lack of constraints on their output margins.
This issue can be resolved by a new generation of ML models inspired by physics-informed neural networks (PINNs), which infuse physics-based knowledge into black-box models. Here, we modify PINN models to develop hybrid frameworks that address the limitations of both phenomenological and micro-mechanical models by obtaining micro-structural behavior from macroscopic experimental data sets.
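To make the physics-infusion idea concrete, a minimal, generic PINN-style loss is sketched below: a small network fits data while a physics residual constrains the solution space. The first-order relaxation ODE used as the residual here is purely illustrative; the hybrid frameworks in this work would instead encode polymer-physics, continuum-mechanics, and thermodynamic constraints.

import torch

# Small network mapping time t -> state y(t)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
k = 0.5  # rate constant of the assumed physics dy/dt = -k*y (illustrative)

t_data = torch.rand(64, 1)                      # measurement times (synthetic)
y_data = torch.exp(-k * t_data)                 # synthetic "experimental" data
t_col = torch.rand(256, 1, requires_grad=True)  # collocation points for the physics term

for step in range(2000):
    opt.zero_grad()
    # Data-fitting term
    loss_data = torch.mean((net(t_data) - y_data) ** 2)
    # Physics residual term: dy/dt + k*y = 0 at the collocation points
    y_col = net(t_col)
    dydt = torch.autograd.grad(y_col, t_col,
                               grad_outputs=torch.ones_like(y_col),
                               create_graph=True)[0]
    loss_phys = torch.mean((dydt + k * y_col) ** 2)
    (loss_data + loss_phys).backward()
    opt.step()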
The objective of this defense is to provide a new approach for reduced-order, physics-based, data-driven modeling of multi-stressor damage in elastomers by infusing knowledge into a neural network. The major thrusts of the research in the proposed dissertation are:
(i) to design a systematic approach that reduces the order of the constitutive mapping and addresses the data volume problem for training;
(ii) to incorporate background knowledge from polymer physics, continuum mechanics, and thermodynamics into the neural networks and constrain the solution space;
(iii) to develop a neural network that predicts various inelastic effects while being far less data-dependent and more interpretable than current PINNs, using a knowledge-confined solution space;
(iv) to validate our proposed hybrid framework on limited data to describe the relationship between elastomeric network mechanics and environmental degradation.
In further detail, the model has been successfully developed and validated in five different damage scenarios that describe the evolutionary process of developing the final platform. These steps are as follows: (I) providing a model for polymers in non-extreme environments that captures the dependence of elastomer behavior on loading conditions such as strain rate and temperature, as well as compound morphology factors such as filler percentage and crosslink density; (II) developing a model for single-mechanism aging, i.e., thermal aging or hydrolytic aging; (III) developing a model to capture the accumulated damage of fatigue and thermo-aging; (IV) introducing physics-informed neural networks (PINNs) to simulate the multiple stiff and semi-stiff ODEs that govern pyrolysis and ablation; and (V) developing a Bayesian surrogate constitutive model to estimate the failure probability of elastomers.
The models used in the proposed platform are the first hybrid models developed and validated for polymer components and thus bring considerable novelty and value to industry. The model proposed in this work can significantly improve the design process of polymeric components by predicting the reliability, durability, and performance loss of materials under the projected mechanical and environmental loading conditions. Such knowledge can significantly reduce design costs, reduce the number of reliability tests needed, reduce maintenance costs and overhauls, and, most importantly, prevent unexpected catastrophic failures.
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Mechanical Engineering at 355-5131 at least one day prior to the seminar; requests received after this date will be met when possible.
Email sandra@msu.edu for Zoom information
Department: Mechanical Engineering
Name: Jun Guo
Date Time: Thursday, August 24, 2023 - 12:00pm
Location: Zoom
ABSTRACT
Advisors: Dr. Daniel Segalman and Dr. Wolfgang Banzhaf
Constitutive modeling of engineering materials is a prerequisite to making predictions about systems of which those materials are components. Often the analyst is faced with a new material or a traditional material in a state (strain, strain rate, temperature, etc.) for which there is no accepted constitutive model. In such cases the analyst must construct a constitutive model suitable to the purpose in an ad hoc manner, a task often dependent on individual experience or serendipity.
Here, we first explore a naive genetic programming approach to constructing constitutive equations suitable for engineering analysis, but the results of its direct application are disappointing. A number of approaches are then employed to address the problem component by component, resulting in significantly better equations with respect to criteria regularly applied to assess the utility of constitutive models. The improved approach is applied to constructing constitutive models for a metal (a yield function), two bio-materials (ligament and aorta tissues), and a geo-material (frozen soil). The approach developed here shows greater benefit over a direct application of genetic programming as the material behavior becomes more complex.
An additional approach is introduced to generate the basis functions, which makes it easier to formulate the nonlinear behavior for engineering materials. It considers the separate effects of each variable and their interactions to formulate the material behavior more precisely. By using the basis functions, we can generate hierarchical models with varying conformity to experimental data, complexity, and condition number.
It is conventional in model calibration to seek a vector of parameters that yields an adequate fit to the calibration data and to use that vector for model predictions. For various reasons, including the fact that how one defines "adequate fit" (or even "best fit") is quite arbitrary, there can be a subspace of equally plausible parameter vectors. A measure of merit for constitutive models is that, though there may not be a unique acceptable parameter vector, all plausible parameter vectors will be very similar. If this condition is not satisfied, the models that fit the data equally well under a multitude of parameter vectors may vary substantially, and uncertainty quantification becomes impossible. The contribution of non-uniqueness of calibrated parameter vectors to meaningful prediction is illustrated on two different problems. A mathematical formulation for this measure of merit, involving the condition number of a Hessian matrix, is proposed so as to incorporate this parameterization issue into the production of candidate constitutive models.
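A minimal sketch of that measure of merit for a generic least-squares calibration objective is given below; the Gauss-Newton approximation of the Hessian is used, and the Jacobian shown is a placeholder rather than any model from this work.

import numpy as np

def calibration_condition_number(residual_jacobian):
    """Condition number of the Gauss-Newton Hessian H = J^T J, where J is
    the Jacobian of the calibration residuals with respect to the model
    parameters. A large value flags nearly interchangeable parameter
    vectors and therefore poorly constrained calibrations."""
    J = np.asarray(residual_jacobian, dtype=float)
    H = J.T @ J
    return np.linalg.cond(H)

# Illustrative use: two nearly redundant parameters give a huge condition number
J = np.array([[1.0, 1.001],
              [2.0, 2.002],
              [3.0, 3.001]])
print(calibration_condition_number(J))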
Multi-objective optimization is employed to generate constitutive models with good fitness, low complexity, and low condition number. The evaluation of one of these objectives, the condition number, is computationally prohibitive when incorporated into the problem in a conventional manner; we developed an approach that alleviates this issue and generates models with high efficiency.
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Mechanical Engineering at 355-5131 at least one day prior to the seminar; requests received after this date will be met when possible.
Email sandra@msu.edu for Zoom information
Department: Mechanical Engineering
Name: Tyler J. Bauder
Date Time: Tuesday, August 15, 2023 - 11:00am
Location: Zoom
ABSTRACT
Advisor: Dr. Patrick Kwon
Electron Beam Melting (EBM) is a relatively new Powder Bed Fusion (PBF) Additive Manufacturing (AM) process. Unlike the very similar laser PBF process, the EBM process occurs in an ultra-high vacuum (UHV), high-temperature (~700°C) chamber, reducing residual stress and providing superior protection against oxidation. This makes EBM ideal for processing oxygen-sensitive materials like Ti-6Al-4V, whose high strength-to-weight ratio, corrosion resistance, and high-temperature performance have drawn the interest of aerospace and other high-performance manufacturers. Due to the nature of these industries, fatigue life is of particular interest. However, the relationship between EBM processing and fatigue life is not well studied and is thus the focus of this dissertation.
First, an L16 Taguchi Design of Experiments (DOE) was constructed to investigate the effects of focus offset, line offset, speed function, Hot Isostatic Pressing (HIP) treatment, and surface roughness on the very high cycle (VHC) fatigue life of Ti-6Al-4V. The two HIP treatments were 800°C at 200 MPa for 2 hours, and 1100°C at 100 MPa for 2 hours followed by a 2.5°C/min quench. Half of the samples were tested in the as-machined condition with an average roughness, Ra, of 0.2 μm, and the other half were further polished using Magnetic Assisted Finishing (MAF) to Ra = 0.1 μm. An ultrasonic fatigue testing machine was used to test fatigue life at 500 and 550 MPa loads with a load ratio of R = -1. Nearly 225 samples were tested, with 7 repeats per load condition.
Fatigue results indicated that neither the machine parameters nor surface roughness had a statistically significant correlation with fatigue life. However, a statistically significant correlation between HIP treatment and fatigue life was found. The 800°C samples performed as well as, if not better than, conventional Ti64, with average fatigue lives of 8.08E+07 and 3.28E+06 cycles at 500 and 550 MPa, respectively, while the 1100°C samples displayed significantly lower fatigue performance, with average fatigue lives of 7.21E+05 and 1.38E+05 cycles at 500 and 550 MPa, respectively. Microstructure and fractography investigations suggest that the poor performance of the 1100°C samples can be attributed to coarsening of the prior beta (β) grains during the super-transus HIP treatment, leading to the formation of large colonies of similarly oriented alpha (α) grains and allowing easier dislocation movement across aligned preferential slip directions.
This study concluded that the most important factor controlling the fatigue life of EBM-processed Ti-6Al-4V is the post-build HIP/heat treatment, and that fine-tuning of print settings beyond those required to prevent obvious porosity and swelling defects will not have a significant effect on the fatigue life of HIPed Ti-6Al-4V.
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Mechanical Engineering at 355-5131 at least one day prior to the seminar; requests received after this date will be met when possible.
Email sandra@msu.edu for Zoom information
Department: Mechanical Engineering
Name: Nicole Arnold
Date Time: Monday, August 14, 2023 - 2:00pm
Location: 3112 Engineering Building and Zoom
ABSTRACT
Advisor: Dr. Tamara Reid Bush
Osteoarthritis (OA) is a debilitating musculoskeletal disease that causes degeneration of the joint surfaces. One of the most common sites of OA is the base of the thumb, the carpometacarpal (CMC) joint. CMC OA has been cited as a common cause of joint pain and disability, affecting the range of motion and strength of the hand. Due to this loss of hand function, individuals have trouble carrying out activities of daily living, resulting in a decrease in independence. Furthermore, CMC OA disproportionately affects females more than males, especially over the age of 55.
When conservative treatment options fail, surgical intervention may be necessary. The most common surgical option, ligament reconstruction with tendon interposition, is used to restore function and reduce pain for those with thumb CMC OA. The effectiveness of surgery is commonly determined via patient questionnaires and clinical measurement devices. However, the clinical measurement devices used to document changes pre- and post-surgery insufficiently capture the three-dimensional (3D) movement of the thumb and lack accurate representation of isolated thumb forces. Reliance on these clinical metrics has led to gaps in research associated with the thumb for both healthy and arthritic individuals. For the best treatment options and rehabilitation, new data and methods associated with thumb function are needed.
The objectives of this work were to: 1) identify the most appropriate mathematical method (Euler or body-fixed floating axis joint coordinate system methods) to obtain 3D motion patterns of the thumb, 2) determine and compare the motion abilities of the thumb in healthy males and females split into two groups (older healthy (OH) and younger healthy (YH)) and of those with CMC OA at three time points (pre-surgery, 3-months and 6-months post-surgery), and 3) compare isolated thumb force generation in males and females (OH and YH) and of those with CMC OA at three time points (pre-surgery, 3-months and 6-months post-surgery).
Results showed that OA individuals utilized compensatory mechanisms to complete certain motion tasks compared to the healthy groups. This is most likely a result of pre-surgery ligament laxity and functional changes at the CMC joint post-surgery. Examination of force data showed that, generally, only 50% of CMC OA participants improved in their force abilities at 6-months post-surgery compared to pre-surgery. Comparisons between healthy and OA groups yielded no significant differences in the amount of force generated at the three self-selected locations. Thumb pull forces were statistically larger across all groups. OH males and females produced larger isolated thumb pull forces compared to the YH males and females. Additionally, wrist position significantly impacted force generation only for OH females.
Overall, this work presents a novel, detailed method for data collection and analysis of thumb motion and force generation. This research provides clinicians with in-depth evidence to encourage individuals to pursue conservative treatment sooner and hand surgeons with more comprehensive information to create specialized treatment plans for those with thumb CMC OA.
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Mechanical Engineering at 355-5131 at least one day prior to the seminar; requests received after this date will be met when possible.
Email sandra@msu.edu for Zoom information
Department: Mechanical Engineering
Name: Kian Kalan
Date Time: Friday, August 11, 2023 - 1:00pm
Location: Zoom
ABSTRACT
Advisors: Dr. Ahmed Naguib and Dr. Manoochehr Koochesfahani
Precision Airdrop Systems (PADS) face difficulties in controlling their landing accuracy when flow-induced vibrations of the suspension lines occur. Recent research has identified a previously unknown cause of these vibrations: galloping of the suspension cables. Galloping is a type of vibration that can occur in cylinders with non-circular cross-sections. The suspension cables in PADS have a cross-section that is approximately rectangular in shape with rounded corners, but with the added complexity of surface topology (due to braiding of the lines). Using load measurements, recent experiments have shown that the presence of surface topology can alter the stability of rectangular cylinders to galloping; an effect that is dependent on Reynolds number. Knowledge of the corresponding topology effect on the flow around the cylinders is presently lacking. Therefore, this study aims to investigate the impact of surface topology on the boundary layer and near-wake flow around a rectangular cylinder with a side-ratio of 2.5 and fully-rounded corners (half-circular leading and trailing edges). The Reynolds number based on the cylinder thickness (d) is in the range Re_d = 800 - 2500. The surface topology is defined using spatial Fourier modes with an amplitude of 5% of d, applied along the perimeter only (2D geometry) and along both the perimeter and the span (3D geometry) of the cylinder. While not an exact replica, this surface topology represents the characteristics of the actual suspension cable reasonably well. The study also investigates the effects of different topology amplitudes by using cylinders with 2.5% and 10% of d. Single-component molecular tagging velocimetry is employed to measure the streamwise velocity and visualize the flow field at various locations above the surface and in the wake of the cylinder.
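As an illustration of how such a topology can be parameterized, a single perimeter Fourier mode is sketched below; only the 5% of d amplitude is taken from the study, while the wavelength, phase, and dimensions are placeholders, and the 3D geometry would add a second mode along the span.

import numpy as np

def perimeter_topology(s, d, amplitude_ratio=0.05, wavelength_ratio=0.5, phase=0.0):
    """Surface-normal perturbation h(s) along the perimeter coordinate s of
    the cylinder cross-section, defined by one spatial Fourier mode.
    d is the cylinder thickness; amplitude_ratio = 0.05 mirrors the 5% of d
    used in the study, while the other parameters are illustrative."""
    wavelength = wavelength_ratio * d
    return amplitude_ratio * d * np.cos(2.0 * np.pi * s / wavelength + phase)

# Example: sample the perturbation along a perimeter of length 7*d
d = 0.01                               # cylinder thickness in meters (illustrative)
s = np.linspace(0.0, 7.0 * d, 500)
h = perimeter_topology(s, d)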
Mean and root-mean-square velocity profiles are analyzed to examine the development of the boundary layer and separated flow on the top and bottom surfaces of the cylinder. The mean separation bubble and the development of the shear-layer unsteadiness over the surface of the cylinders are discussed at α = 0° and at different Reynolds numbers. The results demonstrate the Reynolds-number-dependent effect of the surface topology's cross-sectional geometry and its variation along the span. An interpretation is provided of how these results could influence the galloping instability of the cylinder.
The wake flow is investigated to help better understand the relationship between the wake structures, the surface topology, and the characteristics of the boundary layer. To achieve this, wake mean and rms velocity profiles are interrogated, and the effect of the geometry on the Strouhal number of the wake vortex shedding is analyzed. An examination is also conducted to investigate the unsteady flow physics of the boundary layer and its relationship to the wake flow. This examination uses quantitative measures and flow visualization, and focuses on the smooth-surface cylinder. The analysis identifies and compares different Reynolds-number-dependent boundary-layer flow regimes. The correlation between the wake vortex shedding structure and the various boundary-layer regimes is examined and compared to the established understanding in the literature for a sharp-corner rectangular cylinder.
The results reveal that the details of the topology near the leading edge of the cylinder are most significant in affecting the behavior of the boundary layer flow. For the particular topology wavelength used in the present study, the biggest effect is found when a topology peak is present at the leading edge for the 2D (2Dp) geometry. In comparison to the smooth cylinder, the 2Dp topology substantially increases the separation zone thickness and the separated shear-layer unsteadiness. The ensuing wake flow exhibits an increased wake closure length, slower recovery of the mean centerline velocity, a lower vortex shedding Strouhal number, and disrupted wake vortex organization.
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Mechanical Engineering at 355-5131 at least one day prior to the seminar; requests received after this date will be met when possible.
Email sandra@msu.edu for Zoom information
Department: Mechanical Engineering
Name: Guangchao Song
Date Time: Friday, June 23, 2023 - 2:00pm
Location: Zoom
ABSTRACT
Advisor: Dr. Patrick Kwon
Surface finishing is one of the most critical manufacturing processes, as it influences surface quality and improves the corrosion and fatigue resistance of a product. Among the many available surface finishing technologies, Magnetic-Field Assisted Finishing (MAF) is a promising process that utilizes a slurry mixture of ferromagnetic and abrasive particles in a liquid medium, also known as a brush. The brush, attached to a magnetic tool, directly interacts with the surface of a workpiece and removes surface imperfections and defects to achieve a desired surface finish. Because MAF is a relatively recent development, there is still a lack of understanding regarding its application to various metallic materials, large workpiece areas, and freeform geometries. In the presented study, optimal processing parameters were investigated and obtained for mold steels and sheet metal. The optimized parameter settings significantly improve the final surface roughness: from 434 nm to 26 nm for HP4M mold steel, from 1056 nm to 38 nm for chrome-coated sheet metal, and from 507 nm to 45 nm for AISI S7 steel. Subsequently, the identified parameters were implemented in a continuous setup that successfully finished sheet metal samples with a larger area, enhancing the effectiveness and efficiency of the overall finishing process. Finally, the study identified appropriate brush constituents to improve the efficiency of the MAF process, and simulations were conducted to explore the effects of iron particle size on the brush constituents. The investigations demonstrated that larger iron particles are subjected to a more powerful magnetic force. The MAF process is not yet mature enough for practical industrial applications; this work determined the optimal contents of the brush constituents, which will contribute to making the MAF process more practical.
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Mechanical Engineering at 355-5131 at least one day prior to the seminar; requests received after this date will be met when possible.
Email sandra@msu.edu for Zoom information
Department: Mechanical Engineering
Name: Akshay Shailendra Pakhare
Date Time: Wednesday, June 21, 2023 - 2:00pm
Location: 3540 Engineering Building and Zoom
ABSTRACT
Advisor: Dr. Siva Nadimpalli
The capacity and energy density of current rechargeable batteries are not sufficient to meet future energy storage demands. One strategy to solve this issue is to replace the existing electrode materials with high-performance materials. Commercial electrodes are composites consisting of active material particles (which are responsible for energy storage) and a polymer binder with conductive additives, which holds the particles together and provides an electrical network. Both the negative (i.e., anode) and positive (i.e., cathode) electrodes are composites. Graphite is the conventional active material in the anode, and high-performance materials such as Si, Sn, and Ge are being considered as replacements for graphite due to their higher energy density. For example, Si offers nearly 10 times more capacity than graphite (3579 mAh/g compared to 372 mAh/g). However, these high-performance materials exhibit poor cyclic performance and undergo significant capacity fade, i.e., reduction of usable capacity with cycling. To address these issues, this dissertation has two broad goals: 1) to develop novel experimental methods for interface fracture characterization in batteries, and 2) to develop a comprehensive multiphysics model for rechargeable batteries.
Capacity fade occurs in batteries mainly through two mechanisms: chemical and mechanical processes. The chemical process involves loss of active ions to irreversible reactions, resulting in the formation of a passivation layer called the solid electrolyte interphase (SEI). The mechanical process involves fracture of active material particles or failure of the interface between the binder and the active material particles. In this work, the focus is on the mechanical process, especially the failure of binder/active material interfaces in the negative electrode (i.e., anode). Although binder/active material interfaces exist in both the anode and cathode, the large volume changes of anode materials make interface failure a critical issue for anodes. For example, the most promising next-generation anode material, Si, undergoes nearly 270% volume change during electrochemical cycling. This level of volume change causes interface failure and loss of the electrical network in the electrode, resulting in capacity fade. In spite of its importance, there is a lack of understanding of interface failure in rechargeable batteries. In this study, we developed a novel experimental method to characterize the interface failure behavior in a lithium-ion battery system. Specifically, PVdF polymer was used as the binder and Si as the active material in this model system. Samples for fracture characterization were prepared by depositing PVdF on a Si substrate followed by a series of nanofabrication processes. The blister test samples fabricated in this process were tested in a novel electrochemical cell in conjunction with an in-house optical system based on the Michelson interferometer principle. The samples were pressurized until the PVdF film delaminated from the Si substrate. The mechanical response of the pressurized film was measured, and the PVdF/Si interface fracture was characterized in terms of the critical energy release rate, Gc. The effect of thermal oxide (i.e., SiO2) on the interface failure behavior was investigated. Further, the same setup was used to determine the effect of galvanostatic electrochemical cycling of Si on the interface failure behavior.
The significant volume change of next-generation high-performance materials during electrochemical cycling can generate stresses as high as 1 GPa. These high stresses in a high-performance material undergoing large deformation affect the diffusion of ions in the active material particle, the voltage of the battery, and the electrochemical kinetics at the electrode/electrolyte interface. Theoretical models are necessary to develop high-energy-density, durable batteries for future energy storage demands. Current battery models account for stress-potential coupling but assume steady-state electrochemical kinetics. However, transient electrochemical kinetics are required to capture the rate-dependent electrochemical behavior usually observed in batteries during operation, i.e., when current is drawn at various rates during the discharge process. Also, the existing models were developed for Li-ion batteries, and there is a need to extend them to other battery chemistries (e.g., Na-ion). Therefore, we have developed a theory for Li-ion and Na-ion electrode active materials. A diffusion-deformation model with transient electrochemical kinetics was developed and implemented in a finite element package.
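For context, a commonly used form of the stress-potential coupling in such diffusion-deformation models, given here as a textbook relation rather than the specific constitutive choice of this work, writes the chemical potential of the inserted ion as

\mu = \mu^{0} + R T \ln\!\left(\frac{c}{c_{\max} - c}\right) - \Omega\,\sigma_{h},

where c is the ion concentration, c_max the saturation concentration, \Omega the partial molar volume, and \sigma_h the hydrostatic stress; the species flux is driven by the gradient of \mu, which is how gigapascal-level stresses feed back on diffusion, the cell voltage, and the interfacial kinetics.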
By combining the experimental and modeling tasks outlined above, this dissertation successfully characterized and simulated the failure behavior of the binder/active material interface and attempted to predict the capacity fade behavior of rechargeable batteries.
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Mechanical Engineering at 355-5131 at least one day prior to the seminar; requests received after this date will be met when possible.
Email sandra@msu.edu for Zoom information
Department: Mechanical Engineering
Name: Ahmed Yousef
Date Time: Monday, June 19, 2023 - 9:30am
Location: Zoom
ABSTRACT
Advisors: Dr. Maryam Naghibolhosseini and Dr. Mohsen Zayernouri
Adductor laryngeal dystonia (AdLD) is a neurological voice disorder that disrupts laryngeal muscle control during running speech. Diagnosis of AdLD is challenging because of the limited scientific consensus on accurate diagnostic criteria, as it can mimic the voice features of other voice disorders. The use of laryngeal high-speed videoendoscopy (HSV), a powerful tool for capturing detailed vocal fold (VF) vibrations, has been almost nonexistent in the study of AdLD and limited to sustained phonation rather than the connected speech in which AdLD's symptoms manifest. The present dissertation aims to address this gap in the literature using HSV and to provide, for the first time, quantitative analysis of the impaired vocal function in AdLD during connected speech. To accomplish this, HSV recordings were collected from vocally normal adults and AdLD patients during connected speech. Five studies were implemented to analyze and extract clinically relevant information from these recordings.
The first study investigated the differences between AdLD and normal controls based on evaluating running speech durations in HSV over which VFs were visually obstructed by excessive movements of laryngeal tissues. To facilitate these analyses, a deep learning tool was developed to automatically classify HSV frames in terms of detecting visual obstructions in the VF images. The second study provided a new image segmentation tool for detecting VF edges during running speech in HSV. This tool was developed using a unique combination of the active contour modeling method and a machine-learning based method (k-means clustering) to segment VF edges in HSV kymograms. The third study developed a quantitative representation of VF dynamics in AdLD in running speech using HSV. A deep learning technique was used based on the tool developed in study two to segment the glottal area/edges and extract the glottal area waveform from the HSV recordings for analysis. The fourth study analyzed the pathological vocal function of AdLD during phonation onset and offset in connected speech using HSV. An automated approach was developed and validated with manual analysis to measure and compare the glottal attack and offset times between AdLD group and normal controls. Study five presented a one-mass lumped model that can estimate glottal area waveform and biomechanical characteristics of VFs based on HSV data.
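A minimal sketch of the intensity-clustering step in such a segmentation is given below, using scikit-learn k-means on kymogram pixel intensities; the pre-processing, active contour refinement, and deep learning components of the actual tools are omitted, so this is only a rough proxy.

import numpy as np
from sklearn.cluster import KMeans

def glottis_mask_from_kymogram(kymogram, n_clusters=2):
    """Cluster the pixel intensities of a grayscale HSV kymogram (2D array)
    and return a boolean mask of the darkest cluster, a rough proxy for the
    glottal gap between the vocal fold edges."""
    values = kymogram.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(values)
    labels = km.labels_.reshape(kymogram.shape)
    darkest = int(np.argmin(km.cluster_centers_.flatten()))
    return labels == darkest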
The results of study one showed accurate detection of the visually obstructed VF frames, facilitating the study of laryngeal activities in AdLD. The findings revealed that the AdLD group exhibited longer durations of obstruction, making this measure a potential candidate for AdLD assessment. Also, identifying parts of connected speech that provide an unobstructed view of the VFs allows for developing optimal passages for precise HSV examination and disorder-specific clinical voice assessment protocols. Studies two and three demonstrated promising performance of the proposed automated tools for detecting VF edges and analyzing glottal area waveforms. These techniques overcame the challenges involved in HSV analysis, including the poor image quality during running speech and the excessive laryngeal maneuvers of AdLD. Future research should benefit from these newly developed automated tools for HSV analysis of VF vibrations in running speech to explore diagnostically relevant information in both vocally normal adults and AdLD. The findings of the fourth study showed that the glottal attack and offset times could be accurately measured using the developed automated technique. The measurements showed significantly longer attack times in AdLD and more variability of the attack and offset times in AdLD due to the irregularity of the VF vibratory behavior in this disorder. Accordingly, glottal attack time may be a compelling measure of the severity of AdLD, which can be further investigated in the future using the developed tool with a larger sample size and even for different voice disorders. Obtaining such measures in running speech opens up new lines of research to explore the clinical significance of these measurements and address the diagnostic challenges in AdLD. In the last study, on modeling, the results show successful optimization of the developed one-mass model to closely capture the characteristics of VF vibrations observed in the HSV running speech sample. The study uncovered the potential of this simplified model to estimate the biomechanical properties of the VFs non-invasively with minimal computational cost, paving the path for future research to utilize this model for analyzing connected speech samples and studying the impaired VF dynamics in AdLD.
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Mechanical Engineering at 355-5131 at least one day prior to the seminar; requests received after this date will be met when possible.
Email sandra@msu.edu for Zoom information
Department: Mechanical Engineering
Name: Hoi Ho Hawke Suen
Date Time: Monday, June 5, 2023 - 1:00pm
Location: Zoom
ABSTRACT
Advisor: Dr. Patrick Kwon
Binder jetting has been a promising additive manufacturing (AM) technique since it was first patented 30 years ago. The nature of its densification process is similar to powder metallurgy. Compared to other methods such as powder bed fusion and directed energy deposition, it provides unique benefits, including minimal residual stress, cost-efficient scaling up, and higher powder reusability. However, for most metal materials, the final density obtainable from the binder jetting process is low compared to other AM methods, making the technique suitable for only a few materials or limited to applications such as prototyping. This study implemented liquid phase sintering and a linear packing model to achieve high-density electrical steel using pure elemental and pre-alloyed powder approaches. Boron and silicon were used as additives to form a eutectic composition with iron to achieve liquid phase sintering. The elemental powder approach investigated the effect of boron and silicon on mechanical and magnetic properties using the ANOVA technique. The alloyed powder approach with boron and silicon as additives achieved a final density of 7.39 g/cc (98.4% of the theoretical density of 7.51 g/cc), a maximum permeability of 8489.75, a hysteresis loss of 0.053 Ws/kg at 1.5 T, and a total loss of 34.39 W/kg at 400 Hz and 0.5 T. For comparison, Cramer et al. [1] reported a density of 7.31 g/cc (97.3% of the theoretical density of 7.51 g/cc), a maximum permeability of 10500, and 62.85 W/kg at 400 Hz and 0.5 T. With the processing parameters implemented, a stator with internal cooling channels was made using the joining technique. This shows that binder jetting is a promising technique for fabricating electrical steels without the preferred-orientation limitation of sheet lamination and with a higher density than soft magnetic composites.
[1] Cramer, C. L., Nandwana, P., Yan, J., Evans, S. F., Elliott, A. M., Chinnasamy, C., & Paranthaman, M. P. (2019). Binder jet additive manufacturing method to fabricate near net shape crack-free highly dense Fe-6.5 wt.% Si Soft magnets. Heliyon, 5(11). https://doi.org/10.1016/j.heliyon.2019.e02804
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Mechanical Engineering at 355-5131 at least one day prior to the seminar; requests received after this date will be met when possible.
Email sandra@msu.edu for Zoom information
Department: Mechanical Engineering
Name: Farzaneh Tatari
Date Time: Wednesday, May 10, 2023 - 9:00am
Location: 2555D Engineering Building and Zoom
ABSTRACT
Advisor: Dr. Hamidreza Modares
Identifying a high-fidelity model of nonlinear dynamic systems is a prerequisite for achieving desired specifications in any model-based control design technique. This is because most control design methods rely on the availability of an accurate model of the system dynamics. Coarse dynamics models without generalization guarantees typically induce controllers that are either overly conservative with poor performance or violate spatiotemporal constraints imposed on the system when applied to the true system.
This dissertation investigates the finite-time identification of deterministic and stochastic systems. First, a novel finite-time distributed identification method is introduced for nonlinear interconnected systems. A distributed concurrent learning (CL) based discontinuous gradient descent (GD) update law is presented to learn the dynamics of uncertain interconnected subsystems by minimizing the identification error over a batch of previously recorded data collected from each subsystem as well as its neighboring subsystems. The state information of neighboring interconnected subsystems is acquired through direct communication. Finite-time Lyapunov stability analysis is performed, and easy-to-check rank conditions on the distributed memory data of the subsystems are obtained under which finite-time stability of the distributed identifier is guaranteed. These rank conditions replace the restrictive persistence of excitation (PE) conditions, which are difficult or even impossible to achieve and verify. Next, a fixed-time system identifier for continuous-time nonlinear systems is presented. A novel adaptive update law with discontinuous gradient flows of the identification errors is presented that leverages CL to guarantee the learning of uncertain dynamics in fixed time. Fixed-time Lyapunov stability analysis certifies fixed-time convergence to the stable equilibria of the GD flow of the system identification error under easy-to-verify rank conditions.
Moreover, an online data-regularized CL-based stochastic GD is also presented for discrete-time (DT) function approximation with noisy data. A fixed-size memory of past experiences is repeatedly used in the update law along with the current streaming data to provide probabilistic convergence guarantees with much-improved convergence rates (i.e., linear instead of sublinear) and less restrictive data-richness requirements. This approach allows us to leverage Lyapunov theory to provide probabilistic guarantees that the parameters converge to a probabilistic ultimate bound exponentially fast, provided that a rank condition on the stored data is satisfied. This analysis shows how the quality of the memory data affects the ultimate bound and can reduce the effects of the noise variance on the error bounds. We also present deterministic and stochastic fixed-time stability results for autonomous nonlinear DT systems. Lyapunov conditions are first presented under which the fixed-time stability of deterministic DT systems is certified. Extensions to systems under deterministic perturbations as well as stochastic noise are then considered. For the former, the sensitivity to perturbations of fixed-time stable DT systems is analyzed, and it is shown that fixed-time attractiveness results from the presented Lyapunov conditions. For the latter, sufficient Lyapunov conditions for fixed-time stability in probability of nonlinear stochastic DT systems are presented. The fixed upper bound of the settling-time function is derived for both fixed-time stable and fixed-time attractive systems, and the fixed upper bound of the stochastic settling-time function is derived for stochastic DT systems. Finally, a fixed-time identifier for modeling unknown DT nonlinear systems without requiring the PE condition is developed. A data-driven update law based on a modified GD update law, which relies on CL, is presented to learn the system parameters. Fixed-time convergence guarantees are provided for the modified GD update law under a rank condition on the recorded data; to guarantee fixed-time convergence, fixed-time Lyapunov analysis is leveraged.
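A minimal discrete-time sketch of the concurrent-learning idea for a linear-in-parameters model y = W^T phi(x) is given below: the update uses both the current streaming sample and a fixed memory of recorded samples, so a rank condition on the stacked memory regressors replaces persistence of excitation. The gains and model structure are illustrative; the finite- and fixed-time identifiers in the dissertation use discontinuous or normalized variants of this flow.

import numpy as np

def cl_gradient_step(W, phi_now, y_now, memory, gain=0.05):
    """One concurrent-learning gradient-descent update for y = W^T phi(x).
    W       : parameter matrix, shape (n_features, n_outputs)
    phi_now : current regressor, shape (n_features,)
    y_now   : current measurement, shape (n_outputs,)
    memory  : list of stored (phi_j, y_j) pairs whose stacked regressors
              should satisfy the rank condition."""
    grad = np.outer(phi_now, W.T @ phi_now - y_now)     # current-sample term
    for phi_j, y_j in memory:
        grad += np.outer(phi_j, W.T @ phi_j - y_j)      # memory (replay) terms
    return W - gain * grad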
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Mechanical Engineering at 355-5131 at least one day prior to the seminar; requests received after this date will be met when possible.
Email sandra@msu.edu for Zoom information
Department: Mechanical Engineering
Name: Archana Lamsal
Date Time: Wednesday, May 3, 2023 - 1:00pm
Location: 2555D Engineering Building and Zoom
ABSTRACT
Advisor: Dr. Tamara Reid Bush
Sitting for long periods of time has health implications; two populations affected by long durations in the seated position are office workers and wheelchair users. Office workers suffer from cardiovascular diseases and musculoskeletal disorders as a result of poor posture during prolonged sitting. Wheelchair users are also prone to various health issues, including pressure injuries (PIs), for which shear loading and the associated frictional forces are known risk factors. To address these issues, there is a need to develop an alternative working position that provides an opportunity for postural change in office workers, and to study how the choice of fabrics used for the seat pan cover and the pants worn by wheelchair users affects the frictional properties and shear forces at the seat interface.
The objectives of this work were: 1) to evaluate changes in body position, body loading, and blood perfusion while in a seated, standing, and new office seating position, termed the in-between position; 2) to determine the coefficients of friction of seven commonly worn pant fabrics and two seat cover fabrics using a mechanical device and a tilting seat pan; and 3) to determine the shear force and coefficients of friction between five commonly worn pant fabrics and two seat cover fabrics through the development and use of a novel in-vivo experimental setup that permitted sliding of the human buttocks on the seat pan.
Kinematic and kinetic analyses conducted in the three office working positions indicated that the in-between position provided a hip and lumbar position closer to standing than the seated position. Analysis of the coefficient of friction using the mechanical device indicated that the office-fabric seat cover produced a smaller coefficient of friction than the vinyl seat cover with all pant fabrics. The in-vivo experiments supported this result, indicating that wheelchair users could benefit from an alternative seat cover material. Overall, this body of work provides a knowledge base that will be useful in the design of better office workspaces and in developing strategies to reduce the risk of PI formation in wheelchair users.
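For reference, the static coefficient of friction obtained with the tilting seat pan follows the standard inclined-plane relation, and the in-vivo measurement is the analogous shear-to-normal force ratio:

\mu_s = \tan\theta_{\text{slip}}, \qquad \mu = \frac{F_{\text{shear}}}{F_{\text{normal}}},

where \theta_slip is the tilt angle at which sliding of the fabric pair begins.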
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Mechanical Engineering at 355-5131 at least one day prior to the seminar; requests received after this date will be met when possible.
Email sandra@msu.edu for Zoom information
Department:
Mechanical Engineering
Name:
Syed Fahad Hassan
Date Time:
Thursday, April 27, 2023 - 9:00am
Location:
Composite Vehicle Research Center Conference Room and Zoom
Announcement:
ABSTRACT
Advisor: Dr. Mahmoodul Haq
Thermoplastic polymers have seen a rapid increase in automotive applications. Advances in nanofiller technology have allowed these polymers to compete with thermosets with respect to mechanical properties, light-weighting, emission control, precise manufacturing, and high-volume processing. Unlike metals and thermosets, thermoplastics are relatively soft, and their material response at intermediate strain rates (1-100 s-1), commonly experienced in automotive crashes, is not well documented. The tendency of thermoplastics to undergo large deformations before yield and failure limits the type of apparatus that can be used to characterize their tensile response at these strain rates. This complex polymeric material behavior has led to an apparent lack of experimental techniques for generating reliable tensile stress-strain data and a resultant absence of robust constitutive-equation-based 'digital twins'.
To address this challenge, a three-pronged approach was implemented. First, a novel, symmetric, double-acting drop-weight impact apparatus that allows pure tensile testing at desired strain rates was designed and developed in-house at the Composite Vehicle Research Center (CVRC). Equipped with an accurate data acquisition system, this fixture applies equal displacement to both ends of the test sample, which results in efficient stress transfer throughout the gage length and a smoother transition to dynamic equilibrium. Two in-line load cells, one at each end of the sample, were used to record load data and ensure symmetric load application. Digital image correlation with a high-speed camera was used to obtain strain information. The data acquisition system was automated with an optical trigger to ensure repeatability of the response and facilitate data processing.
Second, the test fixture was validated against Aluminum 6061-T6 data reported in the literature at two strain rates. The validated fixture was then used for the third part of the work, which focused on intermediate strain rate characterization of five commonly used automotive thermoplastics. The thermoplastics were divided into three classes based on their stiffness and ductility. Further, the effect of nanoparticle inclusions on the tensile response of one select polymer (Acrylonitrile Butadiene Styrene, ABS) was investigated. Three nanoparticle types, two graphene platelets and one carbon nanotube, were used at 1 wt.%. The baseline rate-dependent response of all thermoplastics was established by first testing them at several strain rates within the quasi-static regime. Next, all thermoplastics were tested at three strain rates corresponding to fixed drop heights of 10 in., 20 in., and 25 in.
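As a rough illustration of how a fixed drop height maps to an impact velocity and a nominal strain rate, the sketch below uses v = sqrt(2gh) and divides by an assumed 50 mm gage length; the gage length and the simplified single-end kinematics are placeholders, not values from the study.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def impact_velocity(drop_height_m: float) -> float:
    """Free-fall impact velocity for a given drop height (friction neglected)."""
    return math.sqrt(2.0 * G * drop_height_m)

def nominal_strain_rate(drop_height_m: float, gage_length_m: float) -> float:
    """Nominal strain rate as impact velocity over gage length (simplified kinematics)."""
    return impact_velocity(drop_height_m) / gage_length_m

for h_in in (10.0, 20.0, 25.0):                 # drop heights from the study, in inches
    h_m = h_in * 0.0254                         # convert to meters
    print(h_in, round(impact_velocity(h_m), 2), "m/s",
          round(nominal_strain_rate(h_m, 0.05), 1), "1/s")  # 50 mm gage length assumed
```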
Results show a homogeneous strain field in the gage length of all samples tested, indicating a stable impact velocity and load rise rate. Further, the loads recorded by the two load cells were similar, indicating symmetric loading. Importantly, little to no ringing was observed in the output load response, eliminating the need for further signal processing.
In general, the results indicate that with increasing strain rate, tensile strengths increased whereas failure strains (ductility) decreased. The material-specific variations in strength and ductility differed between polymers owing to differences in microstructure and morphology. For example, at a strain rate of 27 s-1, the tensile strength of ABS increased by 84% while its failure strain decreased by 48%, compared with its quasi-static response. ABS nanocomposites exhibited improved strengths at higher strain rates relative to their quasi-static response; nevertheless, their strengths remained lower than that of pristine ABS at similar strain-rate levels. This can be attributed to poor dispersion of the nanoparticles, which were incorporated by mechanical mixing without chemical compatibilization with the host polymer. Overall, the results show that the new apparatus is reliable and repeatable for characterizing the tensile response of thermoplastic polymers at intermediate strain rates.
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Mechanical Engineering at 355-5131 at least one day prior to the seminar; requests received after this date will be met when possible.
Email sandra@msu.edu for Zoom information
Department:
Mechanical Engineering
Name:
Sakib Iqbal
Date Time:
Wednesday, April 19, 2023 - 10:00am
Location:
Zoom
Announcement:
ABSTRACT
Advisor: Dr. Xinran Xiao
Sheet Molding Compound (SMC) is a type of ready-to-mold composite material. The most common SMC consists of glass fiber bundles about one inch long distributed randomly in a B-stage polyester resin. SMC possesses good mechanical properties and manufacturing flexibility in forming complex-shaped parts and is relatively low cost, making it an attractive choice for replacing metallic parts in the automotive industry. Nevertheless, SMC composites have not been utilized in critical automobile components owing to the lack of a satisfactory predictive model, especially for crashworthiness simulations. The main challenges in the analysis of SMC structures are the large scatter in mechanical properties and the large difference in strengths under different stress distributions or loading conditions. For example, SMC demonstrates 1.5-2 times higher strength under 3-point (3-pt) bending than under uniaxial tension. This phenomenon is known as the size effect on strength and can be explained by Weibull's statistical strength theory. For materials with a large size effect, such as SMC, simulations carried out with the mean mechanical properties (i.e., tension, compression, and shear data) would significantly underpredict the flexural response of the structure. To improve the predictions, the statistical distributions of the mechanical properties need to be considered and the size effect should be examined. Although statistical analysis has long been considered in composite design, and probabilistic finite element (PFE) analysis based on statistical strength models has been employed to account for uncertainties and design reliability at every scale in composites, little work has been done to examine the size effect on strength in FE simulations.
This work aims to incorporate the size effect in probabilistic simulations of SMC composite structures. First, we extended the unimodal Weibull strength model into a multimodal one by combining the tensile and flexural Weibull strength models. This approach was examined with a glass fiber SMC composite. A randomization algorithm was developed to incorporate the strength distribution in PFE models: the strength distribution was discretized into a limited number of segments, the average strength of each segment and its probability were determined, and the strength values were then randomly assigned to the integration points of the PFE model according to these probabilities. This approach successfully reproduced the tensile and flexural responses, with mean peak load, post-peak behavior, and energy absorption similar to experimental results within ten iterations. Next, in addition to the tensile strength, the statistical distributions of the elastic modulus and compressive strength were considered. The tensile and compressive strengths were modeled by bimodal Weibull distributions corresponding to the uniaxial and 3-pt bending experiments. To determine the mixture weight fraction of the bimodal models and some difficult-to-measure parameters in the damage-mechanics-based composite material model, model optimization was explored using two techniques: (1) Artificial Neural Network (ANN)-based machine learning (ML) and (2) Random Search (RS). Although computationally inexpensive, ANN-ML proved rather complicated for general-purpose regression, whereas RS is easy to implement and its higher computational cost is acceptable because the optimization has to be done only once for a given material model. The PFE models optimized with RS were examined with four verification cases (tension, compression, 3-pt bending, and 4-pt bending) and three validation cases (open-hole tension and disk bending with fixed and simply supported boundary conditions). The PFE predictions agreed well with the experimental results across these load cases.
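A minimal Python sketch of the randomization idea, assuming invented bimodal Weibull parameters and an arbitrary number of integration points (neither taken from the dissertation): strengths are drawn from a two-component Weibull mixture, discretized into a few levels, and assigned randomly across integration points according to their probabilities.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_bimodal_weibull(n, w, scale1, shape1, scale2, shape2):
    """Draw n strengths from a two-component Weibull mixture with weight w on component 1."""
    pick_first = rng.random(n) < w
    s1 = scale1 * rng.weibull(shape1, size=n)
    s2 = scale2 * rng.weibull(shape2, size=n)
    return np.where(pick_first, s1, s2)

# Hypothetical parameters: tensile-like and flexural-like strength populations (MPa).
strengths = sample_bimodal_weibull(10_000, w=0.6,
                                   scale1=80.0, shape1=8.0,
                                   scale2=140.0, shape2=10.0)

# Discretize the distribution into a few strength levels (segment midpoints) and
# assign them randomly to integration points according to their probabilities.
levels = np.percentile(strengths, [10, 30, 50, 70, 90])
probs = np.full(5, 0.2)
n_integration_points = 2_000
assigned = rng.choice(levels, size=n_integration_points, p=probs)
print(np.round(levels, 1), assigned[:5])
```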
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Mechanical Engineering at 355-5131 at least one day prior to the seminar; requests received after this date will be met when possible.
Email sandra@msu.edu for Zoom information
Department:
Mechanical Engineering
Name:
Tejas Patel
Date Time:
Tuesday, April 18, 2023 - 9:00am
Location:
Zoom
Announcement:
ABSTRACT
Advisors: Dr. Lik Chuan Lee and Dr. Tong Gao
The human heart is a highly complex organ whose primary function is to pump blood through the arteries and veins to perfuse all other body tissues and organs, including itself. In the last decade, cardiac simulations have become increasingly important for gaining clinical insight into cardiac function, treatment, and testing. Multi-physics cardiovascular simulations applied to patient-specific modeling can now aid in the diagnosis of cardiovascular diseases and in studying relevant clinical treatments. Hence, our central objective is to develop a generalized multi-physics finite element (FE) framework that includes thermal-fluid-structure interaction coupling to study cardiac function and treatments.
First, we developed a stabilized FE-based flow solver with heat transfer to study hemodynamics. A Python-based open-source FE library (FEniCS) is used from the ground up to custom-build the solver. We benchmark and validate the solver and study convergence for classical test cases at intermediate Reynolds and Peclet numbers.
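As a toy analogue of the heat-transfer part of such a solver, the sketch below solves a steady advection-diffusion problem for temperature in legacy FEniCS with a plain Galerkin formulation; the mesh, diffusivity, velocity, and boundary data are invented, and no stabilization is included, unlike the actual solver described above.

```python
# Minimal legacy-FEniCS sketch: steady advection-diffusion of temperature in a unit
# square with a prescribed velocity field (toy stand-in, not the dissertation solver).
from fenics import *

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "P", 1)

kappa = Constant(0.01)               # thermal diffusivity (assumed)
vel = Constant((1.0, 0.0))           # prescribed advecting velocity (assumed)
f = Constant(0.0)                    # no volumetric heat source

bc = DirichletBC(V, Expression("x[0]", degree=1), "on_boundary")

T = TrialFunction(V)
w = TestFunction(V)
a = (kappa * dot(grad(T), grad(w)) + dot(vel, grad(T)) * w) * dx
L = f * w * dx

T_sol = Function(V)
solve(a == L, T_sol, bc)
print(T_sol(0.5, 0.5))               # temperature at the domain center
```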
Second, we utilize the solver to investigate cryoballoon ablation (CBA), a minimally invasive procedure that uses freezing (cryoenergy) to treat atrial fibrillation (AF). We first use a patient-specific left atrium (LA) geometry and realistic pulmonary vein (PV) blood flow boundary conditions to validate the hemodynamics of the LA chamber. Next, we position a cryoballoon (CB) at the pulmonary vein ostium to simulate incomplete occlusion during cryotherapy and investigate the factors affecting lesion formation. We observe that lesion size is highly sensitive to the CB position and the balloon-tissue contact area; the threshold gap for lesion formation is 2.4 mm. We also note that as the balloon-tissue contact area increases, the procedure becomes more effective and the power absorbed across the CB decreases.
Third, we extend our development to a fully coupled fluid-structure interaction (FSI) solver with heat transfer using FEniCS. The FSI solver (named vanDANA) uses the immersed boundary (IB) method, is based on the distributed-Lagrange-multiplier/fictitious-domain formulation, and interpolates variables using smeared delta functions. Additionally, the structure can be set as incompressible or compressible. We benchmark the solver and analyze its scalability on HPC systems, building a solid foundation for its future use.
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Mechanical Engineering at 355-5131 at least one day prior to the seminar; requests received after this date will be met when possible.
Email sandra@msu.edu for Zoom information
Department:
Mechanical Engineering
Name:
Mahyar Abedi
Date Time:
Friday, April 7, 2023 - 11:30am
Location:
Zoom
Announcement:
ABSTRACT
Advisor: Dr. Andre Benard
As an inexpensive and environmentally friendly technology, humidification-dehumidification (HDH) is an ideal candidate for water desalination due to its simple design and low energy requirements. HDH systems can treat various types of compromised waters, and the addition of a packed-bed medium enhances desalination efficiency and system compactness, making direct-contact packed-bed HDH desalination systems well suited for geographically distributed desalination units and building integration.
The first part of this thesis focuses on modeling the behavior of a desalination unit and its integration with solar thermal systems; a one-dimensional mathematical model is developed and validated experimentally. Machine learning regression techniques are used to develop a data-driven surrogate model, which accurately predicts desalination performance but requires a larger dataset for high fidelity. A comprehensive assessment is carried out for the integration of an HDH system with a solar chimney, resulting in a solar desalination chimney. The assessment shows that the pressure drop is a critical factor in the system's performance and that a direct-contact packed-bed condenser provides considerable desalination capacity. Small-scale configurations are suited to household freshwater needs, while large-scale configurations can be deployed as distributed water treatment plants in rural areas. Solar air heater systems are also studied for potential integration with desalination units; an experimental flat-plate solar air heater is built and validated with 3D computational and 1D mathematical models. The investigation shows that although the integrated system is more efficient (both thermally and in desalination) than the solar desalination chimney, its dependence on external energy sources to circulate water and air is a significant drawback, limiting the system's autonomy and increasing its operational costs.
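As an illustration of the surrogate-modeling step, the sketch below fits a random-forest regressor to synthetic operating-condition data with a made-up productivity trend; the features, ranges, and response are hypothetical and are not the dataset or model used in this work.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
T_in = rng.uniform(40, 80, n)          # inlet water temperature, degC (assumed range)
m_air = rng.uniform(0.01, 0.05, n)     # air mass flow, kg/s (assumed range)
m_w = rng.uniform(0.02, 0.08, n)       # water mass flow, kg/s (assumed range)
# Synthetic productivity trend: increases with temperature and flow, plus noise.
y = 0.05 * (T_in - 40) * (m_air + m_w) + 0.01 * rng.normal(size=n)

X = np.column_stack([T_in, m_air, m_w])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(round(model.score(X_te, y_te), 3))   # R^2 on held-out samples
```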
The second part of this thesis investigates the integration of desalination units with buildings, specifically greenhouses. The greenhouse roof is a transparent solar water heater that absorbs the NIR waveband to heat used or saline water while transmitting the wavebands essential for plant growth. The hot water then flows through a water-treatment unit to produce potable water. Experimental pilots of the solar water heater are built, and models are developed to predict the behavior of the solar water panel in detail. To capture the impact of spectral variation on lettuce, chosen as the case study, a dynamic growth model is developed that quantifies light spectrum variations; changes in the light spectrum are accounted for via a new light-use efficiency parameter in the plant growth model. Several models are then coupled to predict the behavior of an integrated greenhouse in Phoenix, AZ, comprising the transparent solar water heater roof, the water treatment unit, and the spectrally informed lettuce growth model. The models suggest that the transparent solar water heater on the roof reduces the greenhouse ventilation load by about 30% and that the water treatment unit produces 35-40 kg of potable water daily, sufficient for single-row cultivation of lettuce. According to the plant growth model, the integrated greenhouse has the potential to produce an average of 300 kg of fresh lettuce per month during the growth period.
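A toy version of the light-use-efficiency idea, with invented parameter values and a crude light-interception proxy (none taken from the dissertation's growth model): daily dry-mass gain is intercepted PAR scaled by a spectrum-dependent efficiency factor.

```python
def daily_biomass_gain(par_mol_m2_day, canopy_fraction, lue_g_per_mol, spectral_factor):
    """Dry-mass gain per unit ground area (g m^-2 day^-1), toy formulation."""
    return lue_g_per_mol * spectral_factor * canopy_fraction * par_mol_m2_day

mass = 5.0                                    # initial dry mass, g m^-2 (assumed)
for day in range(30):
    f_canopy = min(1.0, mass / 150.0)         # crude light-interception proxy
    mass += daily_biomass_gain(30.0, f_canopy, 1.0, 0.9)  # all parameters hypothetical
print(round(mass, 1))
```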
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Mechanical Engineering at 355-5131 at least one day prior to the seminar; requests received after this date will be met when possible.
Department:
Mechanical Engineering
Name:
Royal Ihuaenyi
Date Time:
Thursday, April 6, 2023 - 10:00am
Location:
2555D Engineering Building
Announcement:
ABSTRACT
Advisor: Dr. Xinran Xiao
One key challenge in the deployment of future e-mobility systems is ensuring the safe operating condition of high-energy-density batteries. Therefore, understanding battery failure mechanisms and reducing safety risks are critical in the design of electrified systems. Although the response of battery materials and systems under various conditions has been extensively explored in recent years, many challenges remain in developing models to predict failure. One such challenge is the development of accurate thermomechanical models to predict battery failure caused by combined thermal and mechanical loading. Such models aim to identify the thermomechanical failure condition of batteries through battery materials such as the separator. The structural integrity of battery separators plays a critical role in battery safety, because deformation and failure of the separator can lead to an internal short circuit, which can cause thermal runaway. In thermal runaway scenarios, the separator first expands and then shrinks before reaching its melting temperature, and this shrinkage induces tensile stresses in the separator. Hence, a thermomechanical model that can predict the response of separators over their entire range of deformation is needed.
Commonly used battery separators are dry-processed polymeric membranes with anisotropic microstructures and deformation modes that involve various physical processes that are difficult to quantify. These complexities introduce challenges in their characterization and modeling, as their properties and structural integrity depend on multiple factors such as strain rate, loading direction, temperature, and the presence of an electrolyte. To predict the structural integrity of polymeric separators in abuse scenarios, an understanding of the thermal and mechanical behavior of the separator is needed. Because multiple factors influence the structural integrity of polymeric battery separators, developing models to predict their thermomechanical response has always been challenging. Furthermore, computational models in the form of user-defined material models are needed to account for these factors, since existing material models in commercial software lack this capability.
In this study, thermomechanical models are developed to predict the response of polymeric battery separators in thermal ramp scenarios. The time-dependent response of the separators is taken into account, and the material is modeled as viscoelastic in the deformation region before yielding and as viscoplastic under large post-yield deformations. As a first step, a linear thermoviscoelastic model developed on an orthotropic framework was extended to account for the temperature effect and the plasticization effect of electrolyte solutions, to predict the thermomechanical response of separators within the linear range of deformation. In this linear orthotropic thermomechanical model, the temperature effect was introduced through the time-temperature superposition principle (TTSP). To account for the plasticization effect of electrolyte solutions, a time-temperature-solvent superposition method (TTSSM) was developed to model the behavior of the separator in electrolyte solutions based on the viscoelastic framework established in air. Furthermore, an orthotropic nonlinear thermoviscoelastic model was developed to predict the material response under large deformations before the onset of yielding. The model was developed based on the Schapery nonlinear viscoelastic model, and a discretization algorithm was employed to evaluate the nonlinear viscoelastic hereditary integral with a Prony-series kernel based on a generalized Maxwell model with nonlinear springs and dashpots. Temperature dependence was introduced through the TTSP. Subsequently, the nonlinear viscoelastic model was coupled with a viscoplastic model developed on the basis of a rheological framework that considers the mechanisms involved in initial yielding, change in viscosity, strain softening, and strain hardening in the stress-strain response of polymeric battery separators. The coupled viscoelastic-viscoplastic model predicts the thermomechanical response of separators over their entire range of deformation before the onset of failure.
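For orientation, the sketch below evaluates a generalized-Maxwell (Prony series) relaxation modulus at a reduced time obtained from a TTSP shift; the Prony parameters are invented, and a generic WLF-type shift is used purely as a placeholder for the shift functions calibrated in the dissertation.

```python
import numpy as np

def relaxation_modulus(t, E_inf, E_i, tau_i):
    """Prony-series relaxation modulus E(t) = E_inf + sum_i E_i * exp(-t / tau_i)."""
    t = np.atleast_1d(t)[:, None]
    return E_inf + np.sum(E_i * np.exp(-t / tau_i), axis=1)

def wlf_shift(T, T_ref, C1=17.44, C2=51.6):
    """WLF horizontal shift factor log10(a_T); generic constants, not fitted values."""
    return -C1 * (T - T_ref) / (C2 + (T - T_ref))

# Hypothetical Prony parameters (MPa, s) and a reference temperature of 25 degC.
E_inf, E_i = 200.0, np.array([300.0, 150.0, 80.0])
tau_i = np.array([1.0, 10.0, 100.0])

T, T_ref = 60.0, 25.0
a_T = 10.0 ** wlf_shift(T, T_ref)                         # time-temperature superposition
t = np.logspace(-1, 3, 5)                                 # physical times of interest, s
E_at_T = relaxation_modulus(t / a_T, E_inf, E_i, tau_i)   # evaluate at reduced time
print(np.round(E_at_T, 1))
```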
The material investigated in this work is Celgard®2400, a porous polypropylene (PP) separator. Experiments were carried out under different loading and environmental conditions using a dynamic mechanical analyzer (DMA) to characterize the material response and to calibrate and validate the developed models. The thermomechanical models were implemented as user-defined subroutines in the LS-DYNA® finite element (FE) package, which enables simulations that include the thermal expansion/shrinkage behavior. Furthermore, analytical solutions were developed to verify the model implementation and predictions. The results show that the model predictions of material anisotropy, rate dependence, temperature dependence, and the plasticization effect of electrolyte solutions agree reasonably well with the experimental data. The results also demonstrate that non-isothermal simulations that neglect the thermal expansion/shrinkage behavior of the separator resulted in large errors.
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Mechanical Engineering at 355-5131 at least one day prior to the seminar; requests received after this date will be met when possible.
Email sandra@msu.edu for Zoom information
Department:
Mechanical Engineering
Name:
Alessandro Bo
Date Time:
Wednesday, April 5, 2023 - 10:00am
Location:
Zoom
Announcement:
ABSTRACT
Advisor: Dr. Wei Lai and Dr. Andre Benard
This PhD thesis presents an in-depth characterization of the magnesium manganese oxide redox system for energy storage applications. The study is divided into three main parts, each exploring the behavior of the energy storage material at an increasing length scale: the pellet scale (on the order of millimeters), the packed-bed scale (on the order of centimeters), and finally the reactor scale (on the order of meters), at which the energy storage concept is demonstrated in an experimental reactor.
The first part of the study deals with the experimental characterization and modeling of the thermodynamics of the magnesium manganese oxide redox system. The test sample is an individual cylindrical pellet with a 1:1 magnesium-to-manganese molar ratio. Its extent of oxidation is measured via a series of thermogravimetric experiments conducted at temperatures between 1000 and 1500 °C and oxygen partial pressures between 0.01 and 0.9 bar(a). The experimental results are used to develop two thermodynamic models that accurately predict the behavior of the redox system within these temperature and oxygen partial pressure ranges. Furthermore, these models improve the material characterization by providing estimates of the average enthalpy and entropy of reaction. This part provides the minimum theoretical knowledge needed to develop computational models that predict and optimize the operation of the energy storage material when integrated into a large-scale reactor.
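As a rough illustration of how an average reaction enthalpy and entropy set the equilibrium oxygen partial pressure, the sketch below evaluates p_O2,eq = p_ref * exp(-2*dG/(R*T)) for a generic solid reduction releasing half a mole of O2 per mole of oxide; the dH and dS values are placeholders, not the estimates obtained in this work.

```python
import math

R = 8.314          # gas constant, J/(mol K)
P_REF = 1.0e5      # reference pressure, Pa (1 bar)

def equilibrium_po2(T_K, dH_J_mol, dS_J_molK):
    """Equilibrium O2 partial pressure (Pa) for oxide_ox -> oxide_red + 1/2 O2."""
    dG = dH_J_mol - T_K * dS_J_molK
    return P_REF * math.exp(-2.0 * dG / (R * T_K))

# Placeholder enthalpy/entropy of reaction (per 1/2 mol O2 released), not fitted values.
dH, dS = 200e3, 120.0
for T_C in (1000, 1200, 1400, 1500):
    p = equilibrium_po2(T_C + 273.15, dH, dS)
    print(T_C, "degC ->", round(p / 1e5, 4), "bar")
```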
The second part of the study deals with the measurement of the effective electrical conductivity of a packed bed of magnesium manganese oxide pellets. In this experimental campaign, different pellet form factors (cylindrical and spherical) and compositions (1:1 and 3:2 magnesium-to-manganese molar ratios) were examined. The measurements were performed using a four-wire technique at temperatures between 1000 and 1500 °C under atmospheric pressure. This study demonstrates that, under such conditions, the energy storage material is electrically conductive. This result plays a crucial role in the development of fast charging strategies for energy storage systems based on the magnesium manganese oxide redox system: given its electrical properties, the packed bed can be thermally charged by passing an electrical current directly through it (Joule heating) instead of relying on external heating elements. This study provides valuable insights into the design and operation of such energy storage systems, with important implications for the development of more efficient and cost-effective energy storage products.
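For context, a four-wire measurement reduces to sigma_eff = L / (R*A) with R = V/I; the sketch below uses invented geometry and readings, not the experimental values from this campaign.

```python
def effective_conductivity(voltage_V, current_A, length_m, area_m2):
    """Effective conductivity (S/m) from a four-wire measurement across a packed bed."""
    resistance = voltage_V / current_A
    return length_m / (resistance * area_m2)

# Invented example: 0.5 V drop at 2 A across a 0.1 m tall bed of 0.005 m^2 cross-section.
print(round(effective_conductivity(0.5, 2.0, 0.1, 0.005), 2), "S/m")
```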
The third part of the study deals with the modeling and experimental validation of a thermochemical energy storage reactor based on the magnesium manganese oxide redox system. The model combines transient lumped (0D) species and energy governing equations for both the solid and gas phases within the packed bed with 1D axial and radial transient heat conduction equations within the reactor insulating layers. The model is validated using experimental data collected during a redox cycling campaign of a nominally 1 kW / 5 kWh(th) reactor based on the magnesium manganese oxide redox system. The chemical kinetics of the redox material are modeled using an equilibrium-kinetics approach. Experimental correlations are also used to validate the pressure drop measured across the packed bed upon system discharge. This model provides a starting point for the design and optimization of commercial-scale energy storage systems based on the magnesium manganese oxide redox system.
Overall, this PhD thesis provides a foundational understanding of the magnesium manganese oxide redox system behavior at different length scales, starting from the pellet-scale, moving to the packed bed scale, and finally reaching the reactor-scale. The results of this study have significant implications for the development of efficient and scalable thermochemical energy storage systems.
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Mechanical Engineering at 355-5131 at least one day prior to the seminar; requests received after this date will be met when possible.
Email sandra@msu.edu for Zoom information
Department:
Mechanical Engineering
Name:
Javad Hosseinpour
Date Time:
Tuesday, April 4, 2023 - 10:00am
Location:
Zoom
Announcement:
ABSTRACT
Advisor: Dr. Abraham Engeda
The idea of using supercritical carbon dioxide (s-CO2) as the working fluid in a Brayton power cycle has been entertained since the 1960s, but due to the technical limitations of the time, it did not progress far. Today, with more knowledge, a better technological platform, and advanced analysis tools, many believe it is time to revisit the idea of using carbon dioxide as the working fluid for power generation. Theoretically, the concept of a closed-loop s-CO2 Brayton cycle is highly attractive and promising; however, a major hurdle remains: the design, development, and testing of a reasonably sized (10 MWe or higher) prototype of an s-CO2 Brayton-cycle-based power gas turbine. In particular, designing a stable s-CO2 compressor is one of the main challenges that must be addressed.
In this dissertation, a supercritical CO2 Brayton cycle design tool was developed in Microsoft Excel, coupled with the CoolProp real-gas NIST database, to optimize and analyze power cycles and to obtain the best operating conditions for an s-CO2 compressor working in a 10 MWe power cycle. Three s-CO2 Brayton cycles, namely the simple recuperated, recompression, and dual-turbine cycles, were then reconfigured to produce 11.11 MW (10 MWe) of net output power. The results were compared to the conventional Brayton cycle as the baseline s-CO2 layout. The recompression cycle had the highest efficiency but also the highest back-work ratio and the lowest specific work.
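To show how such state-point calculations look with CoolProp's Python interface (rather than the Excel tool described above), here is a hedged sketch of a compression with an assumed isentropic efficiency for CO2 between assumed cycle pressures; the inlet state, outlet pressure, and efficiency are illustrative, not the dissertation's optimized conditions.

```python
from CoolProp.CoolProp import PropsSI

# Assumed state points for a simple s-CO2 compression: inlet near the critical point,
# outlet at a typical cycle high pressure. Not the optimized conditions from this work.
fluid = "CO2"
T1, P1 = 305.15, 7.7e6          # compressor inlet: 32 degC, 7.7 MPa
P2 = 25.0e6                     # compressor outlet pressure, 25 MPa
eta_c = 0.85                    # assumed isentropic efficiency

h1 = PropsSI("H", "T", T1, "P", P1, fluid)
s1 = PropsSI("S", "T", T1, "P", P1, fluid)
h2s = PropsSI("H", "S", s1, "P", P2, fluid)   # ideal (isentropic) outlet enthalpy
w_actual = (h2s - h1) / eta_c                 # specific compressor work, J/kg
T2 = PropsSI("T", "H", h1 + w_actual, "P", P2, fluid)
print(round(w_actual / 1e3, 1), "kJ/kg", round(T2 - 273.15, 1), "degC outlet")
```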
Furthermore, the reconfigured simple recuperated cycle had a thermal efficiency of 43.2% with a specific work of 125.13 kJ/kg, placing it in a moderate range between the dual-turbine and recompression cycles. The lower capital cost of the simple cycle suggests it could be a viable option for commercialization. In addition, a new compressor design procedure was developed in MATLAB for s-CO2 centrifugal compressors with a pinched diffuser under on-design and off-design conditions. The developed codes aim to obtain a stable supercritical CO2 compressor design and to predict the performance of s-CO2 compressors by considering the Span-Wagner real-gas equation of state, the condensation limit, and internal and external losses. The procedure was validated with experimental results for an air compressor and Sandia's s-CO2 compressor to examine the validity of the meanline code. The efficiency and pressure ratio obtained from the 1-D code were compared to CFD results and showed reasonable agreement with experimental data. An overprediction was found at higher mass flow rates because the volute was not considered in the design. Comparing the total-to-static efficiency of Sandia's compressor between the 1-D code and CFD showed that while the CFD results match the experimental data, the code could not calculate the total-to-static efficiency of Sandia's compressor for mass flow rates below 2.5 kg/s.
In addition, a new impeller with a vaneless pinched diffuser was proposed, which achieved a compressor efficiency of 90.45% with an excellent operating range of 47.8%. The results matched simulations well for different mass flow rates at the design speedline of 20,000 RPM. The internal behavior of s-CO2 was also studied at the choke condition, and a new analogy between the compressor passage and a converging-diverging nozzle was drawn for the high limit of the performance map. A loss analysis of the proposed s-CO2 compressor revealed that 75.8% of the total enthalpy loss was due to internal losses. Finally, the condensation contours were studied, and the results highlighted that condensation is unavoidable in an s-CO2 centrifugal compressor; however, it does not cause damage or affect the compressor's performance.
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Mechanical Engineering at 355-5131 at least one day prior to the seminar; requests received after this date will be met when possible.