Department: Biomedical Engineering
Name: Shijun Liang
Date Time: Monday, January 27th, 2025 - 9:00 a.m.
Advisor: Saiprasad Ravishankar
Magnetic Resonance Imaging (MRI) is a critical tool in medical diagnosis and treatment planning due to its excellent soft tissue contrast and non-ionizing nature. However, MRI faces challenges like prolonged scan times and data acquisition constraints arising from patient privacy concerns and heterogeneous medical data. This thesis introduces computationally efficient deep learning algorithms to address these challenges in two parts.
In Part I, we focus on MRI reconstruction under limited or no data availability. For limited data, we propose the LONDN MRI method, which trains on a small set of adaptively chosen neighboring images, achieving significant improvements over supervised models such as MoDL. For data-free scenarios, we develop Self-Guided DIP and Autoencoding Sequential DIP (aSeqDIP), which leverage self-regularization and sequential U-Net architectures to improve both performance and efficiency, outperforming traditional supervised models.
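For readers unfamiliar with the deep image prior (DIP) family that these methods extend, the sketch below shows the generic idea: an untrained network is fitted directly to a single undersampled measurement, with the architecture itself acting as the regularizer. This is a toy single-coil example with an arbitrary random mask and a small CNN, not the Self-Guided DIP or aSeqDIP algorithms themselves.

```python
# Generic deep-image-prior loop for toy undersampled single-coil MRI.
# Illustrative only -- not the Self-Guided DIP / aSeqDIP methods of the thesis.
import torch
import torch.nn as nn

H = W = 64
mask = (torch.rand(H, W) < 0.3).float()   # arbitrary k-space sampling mask
x_true = torch.randn(H, W)                # stand-in for an unknown image
y = torch.fft.fft2(x_true) * mask         # undersampled k-space measurement

net = nn.Sequential(                      # small untrained CNN acts as the prior
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
z = torch.randn(1, 1, H, W)               # fixed random input code
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):
    x_hat = net(z).squeeze()              # current image estimate
    loss = (torch.abs(torch.fft.fft2(x_hat) * mask - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

recon = net(z).detach().squeeze()         # in practice, early stopping is critical
```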
In Part II, we enhance the robustness and generalization capabilities of medical imaging models using a combination of randomized smoothing and diffusion-based purification. We introduce SMUG, an unrolling method that mitigates worst-case perturbations and data variations such as mask shifts and noise. Additionally, our Diffusion Purification framework effectively removes noise in biomedical lesion data, surpassing adversarial training and other robustness methods.
These contributions advance MRI reconstruction and robust medical imaging, addressing critical limitations in clinical workflows.
Department: Biomedical Engineering
Name: Stephen Branch
Date Time: Tuesday, November 12th, 2024 - 10:00 a.m.
Advisor: Dr. Dana Spence
With over 10.5 million units of red blood cells (RBCs) transfused in 2021 in the United States alone, blood transfusions are among the most common hospital procedures. These life-saving interventions treat a variety of conditions that result in decreased hemoglobin levels; common causes include anemia, hemoglobinopathy, cancer, chemotherapy, radiotherapy, and blood loss from trauma or major surgery. Despite centuries of research into the storage of RBCs for transfusion, current methods cannot prevent degradation of these cells, and detrimental biochemical and physical changes occur after even short periods of storage. This collection of harmful storage-induced changes is known as the storage lesion. The storage lesion can be broadly categorized into oxidative damages and metabolic impairments. Oxidative damages include generation of reactive oxygen species, lipid and protein oxidation, and degradation of cellular structure leading to severe morphological changes. Metabolic impairments lead to accumulation of lactate, acidifying the cellular milieu, and decreases in adenosine triphosphate (ATP) and 2,3-diphosphoglycerate levels.

Transfusion of RBCs significantly impacted by the storage lesion raises patient safety concerns. Though contemporary transfusion medicine has mitigated clinical complications from these procedures, they are not without risk; complications range from transfusion-transmitted infections to fatal acute reactions, such as transfusion-associated circulatory overload. Minimizing the number of transfusions required is a key objective in blood banking research, and this may be achieved by improving the efficacy of transfusions through reduction of the storage lesion.

This work addresses the storage lesion through modified additive solutions, which are used to prolong the viability of RBCs in storage, and through investigation of post-storage cellular rejuvenation. The additive solutions used today contain extremely high glucose concentrations, ranging from 45 mM to 111 mM, and such hyperglycemic conditions have been implicated in the development of various aspects of the storage lesion. Previous reports have demonstrated that a normoglycemic additive solution containing just 5.5 mM glucose is effective in reducing oxidative stress and osmotic fragility in stored RBCs as well as increasing ATP release and cellular deformability. Because glucose is metabolized throughout storage, an RBC feeding system was previously developed to automate maintenance of normoglycemic conditions. However, aspects of this system's design limited experimental control and regulatory compliance, and therefore translational potential. Here, a second-generation RBC feeding system is developed and employed in additional normoglycemic RBC storage studies. Beyond validating the performance of this system, novel benefits of normoglycemic storage, such as reduced cellular glycation and hemolysis, are confirmed. Expanding on previous studies, rejuvenation of RBCs stored under these conditions via post-storage washing is investigated. This rejuvenation results in significant improvements to the health of stored RBCs; both cellular deformability and morphology were consistently restored to near-normal.
Department: Biomedical Engineering
Name: Vittorio Mottini
Date Time: Tuesday, August 27th, 2024 - 11:00 a.m.
Advisor: Prof. Jinxing Li
The rapid advancement of wearable technology has introduced a new era of human-machine interaction, with soft bioelectronics emerging as a novel field at the intersection of materials science, electrical engineering, and healthcare. Soft bioelectronics offers unprecedented opportunities for seamless integration with the human body, promising to transform personal health monitoring, medical diagnostics, and human-machine interfaces. These flexible and stretchable electronic systems conform to the complex topography of human skin, adapting to its constant motion and deformation while minimizing mechanical stress on tissues. This adaptability enables long-term, comfortable wear for continuous physiological monitoring, advanced prosthetic control, or novel human augmentation, overcoming the limitations of rigid electronic systems.

Despite significant progress, challenges persist in developing skin-interfaced electronics that maintain high performance across diverse skin conditions and age groups. This dissertation presents the development and evaluation of "InSkin," an innovative, inclusive skin-interfaced electronic platform designed for high-fidelity, high-density, multi-channel electrophysiological recording. The InSkin technology addresses critical challenges in current skin-interfaced electronics, particularly the variability in signal quality across diverse skin conditions and age groups. A novel conductive polymer composite, Solution WGP, was engineered to create a conformal, stretchable interface that adapts to various skin morphologies. This material demonstrated exceptional mechanical properties, maintaining electrical functionality at strains up to ~1200% while achieving a 93.18% reduction in electrode-skin impedance compared to commercial electrodes.

Comprehensive characterization studies revealed InSkin's superior performance across different skin types. The device maintained 80.65% of its signal amplitude on wrinkled skin compared to smooth skin and 100% on hairy skin compared to shaved skin. Long-term stability tests showed 75% signal quality retention after 24 hours of continuous wear. High-density surface electromyography (sEMG) mapping capabilities were demonstrated using a 32-channel array with 12 mm inter-electrode spacing. This enabled detailed visualization of muscle activity patterns, including motor unit action potential propagation and innervation zone identification, showcasing potential applications in neuromuscular research and personalized rehabilitation. Advanced gesture recognition algorithms integrated with the InSkin platform achieved 97.7% accuracy in classifying ten hand gestures, significantly outperforming commercial electrodes. This performance was consistent across age groups, with only a 4% reduction in accuracy for older participants. The system's efficacy was further validated through successful integration with a prosthetic hand prototype, demonstrating the potential for intuitive, high-precision control.
Department: Biomedical Engineering
Name: Evran Ural
Date Time: Thursday, August 22nd, 2024 - 9:30 a.m.
Advisor: Chris Contag
Many conditions of chronic inflammation, such as ulcerative colitis, predispose an individual to developing cancer. The predisposition of chronically inflamed tissue to neoplasia and malignancy is referred to as immunocarcinogenesis. Colitis is characterized by relapsing episodes of inflammation and ulceration in the colonic mucosa. Macrophages play an important role in regulating the immune response in colitis and secrete proinflammatory factors that may promote colitis-associated cancer. Extracellular vesicles (EVs) have been shown to mediate colitis and colon cancer progression, and there is accumulating evidence that the activation states of macrophages influence EV secretion and signaling effects in inflammation and cancer. Macrophages in the ulcerated colonic submucosa are exposed to increased levels of bacterial endotoxins, so we sought to model EVs from colitis in culture using EVs from lipopolysaccharide (LPS)-activated macrophages. To investigate the impact of macrophage EVs on colitis-associated cancer, we characterized EVs from LPS-activated macrophages, treated colon cells and tumors with isolated macrophage EVs, and analyzed the inflammatory and protumorigenic effects in vitro and in vivo. Our results provide evidence that EVs released from LPS-activated macrophages increase inflammation in the colonic epithelium, promote cell growth, lead to anchorage-independent growth, induce protumorigenic protein expression in transformed cells, and significantly alter the local immune environment. These findings have implications for the origins and progression of colitis-associated malignancy.
Department: Biomedical Engineering
Name: Daniel Marri
Date Time: Tuesday, May 14th, 2024 - 1:00 p.m.
Advisor: Prof. Sudin Bhattacharya
Circadian clocks are intrinsic molecular oscillators present in cells across prokaryotes and eukaryotes that synchronize physiological processes with external cues, enabling organismal adaptation and survival. These clocks regulate crucial biological functions, including sleep-wake cycles, thermoregulation, hepatic metabolism, and hormonal secretion, through the rhythmic expression of clock-controlled genes. Perturbations in the circadian clock network can contribute to the pathogenesis of various disorders, such as obesity, diabetes, inflammatory conditions, and certain cancers. To understand the effect of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) on the spatial and temporal dynamics of circadian clock genes, interpretable machine learning models were developed to predict BMAL1 binding to DNA in liver, kidney, and heart tissues using genetic and epigenetic features (binding sequence, DNA shape, and histone modifications). Furthermore, a spatiotemporal multicellular mathematical model of the mammalian circadian clock in the liver lobule was developed to investigate intercellular coupling in the synchronization of circadian clock expression across the portal-to-central axis. Lastly, to understand the interplay between the spatial and temporal axes of gene expression in the liver, particularly in drug metabolism pathways, non-linear mixed-effect models were developed to analyze the acute effect of TCDD on the spatiotemporal expression of genes in the hepatic lobule.
These findings provide a comprehensive examination of circadian rhythms and their disruption by TCDD in the liver, encompassing molecular mechanisms, predictive modeling, and spatiotemporal dynamics. The study also offers valuable insights into the intricate regulatory mechanisms governing circadian rhythms, the significance of zonation in hepatic functions, and the interplay between spatial and temporal gene expression. Taken together, our findings have the potential to contribute significantly to our understanding of circadian resilience and the mitigation of pathological conditions, particularly in the context of drug metabolism pathways and hepatic function.
Department: Biomedical Engineering
Name: Logan Soule
Date Time: Monday, April 8th, 2024 - 9:00 a.m.
Advisor: Prof. Dana Spence
Red blood cell (RBC) transfusions are life-saving procedures for a wide variety of patient populations, with nearly 30,000 transfusions performed each day in the United States. However, transfusions can also result in complications for patients, including inflammation, edema, infection, and organ dysfunction. These poor transfusion outcomes may be related to irreversible chemical and physical damage that occurs to RBCs during storage, called the "storage lesion". These damages, including diminished ATP production/release, decreased deformability, increased oxidative stress, and increased membrane damage, may result in poor functionality when transfused. The damage that occurs during storage may be due to the hyperglycemic nature of the anticoagulants and additive solutions currently used for RBC storage. All FDA-approved storage solutions contain glucose at concentrations more than eight times higher than those in the bloodstream of a healthy individual. Previous work has shown that storing RBCs at physiological glucose concentrations (4-6 mM), or normoglycemic conditions, alleviates many storage-induced damages, including increased ATP release, increased deformability, reduced osmotic fragility, and decreased oxidative stress. However, this storage technique was also accompanied by limitations to its translation to clinical practice. The manual feeding of glucose to normoglycemic stored RBCs to maintain physiological glucose levels introduced both a breach in sterility and unreasonable labor requirements that could not be translated to clinical practice. Additionally, the low-volume storage (< 2 mL) method with custom PVC bags used in previous work may not elicit similar benefits when scaled up to larger volumes with commercially available blood collection bags.
This work overcame these limitations through the design and implementation of an autonomous glucose delivery system that maintained normoglycemia of stored RBCs for 39 days of storage while also maintaining sterility. This system was then used to store RBCs under normoglycemic conditions and monitor key storage lesion indicators, resulting in reduced osmotic fragility, decreased oxidative stress, and reduced morphological changes. There was also no impact on glycolytic activity or hemolysis levels, improving upon previous work, which reported significant hemolysis surpassing the FDA threshold of 1%. These data solidify and improve upon previous results, indicating that normoglycemic RBC storage reduces damage in storage and may translate to better in vivo function. The autonomous glucose delivery system also significantly advances the applicability of the normoglycemic storage technique to clinical practice, making large-scale studies now possible. Additionally, a novel rejuvenation therapy was investigated, highlighting the capability of albumin, an abundant plasma protein, to reverse the membrane damage seen during RBC storage, yielding RBCs closer in shape and size to fresh RBCs.
Department: Biomedical Engineering
Name: Meghan Hill
Date Time: Wednesday, March 6th, 2024 - 12:00 p.m.
Advisor: Taeho Kim
Glioblastoma is one of the most aggressive and invasive types of cancer. Unfortunately, because its symptoms overlap with those of other neurological diseases and are difficult to attribute with current diagnostic measures, it is often not discovered until stage four. At this point, patients have limited options for care and ultimately end up in palliative care not long after diagnosis. The blood-brain barrier (BBB) has proved to be a difficult boundary for modern medicines, as it prevents adequate drug accumulation within the brain. As gliomas often form in inoperable parts of the brain, conventional FDA-approved therapies prove ineffective. Within the past ten years, targeting strategies using RGD peptides have proven effective at transporting drugs, contrast agents, or nanoparticle delivery vehicles across the barrier, but they suffer from off-target effects due to expression of the peptide-recognizing integrins on the surface of healthy cells. Extracellular vesicles, particularly exosomes, have shown promising targeting specificity for the cells from which the vesicles originate. They have also shown a remarkable innate ability to pass through the BBB. The focus of this project was the development of a Prussian blue nanoparticle coated with glioblastoma-derived exosomes (Exo:PB) that could readily accumulate within glioblastoma tissue and provide enhanced diagnostics as well as localized therapy. Prussian blue nanoparticles are FDA-approved for scavenging heavy metals present within the body after extreme radiation exposure. Given their exceptional suitability for photothermal therapy and their ability to be used for photoacoustic imaging and MRI, they are an ideal candidate for glioblastoma theranostics. By investigating the distribution and accumulation patterns of these newly developed Exo:PB nanoparticles in preclinical mouse models, earlier diagnosis and treatment intervention can be achieved for glioblastoma.
Department: Chemical Engineering and Materials Science
Name: Ashiq Shawon
Date Time: Monday, November 25th, 2024 - 11:00 a.m.
Advisor: Dr. Alexandra Zevalkink
The crystal structure and bonding characteristics of intermetallic compounds critically influence their thermal and elastic properties. Polymorphic phase transitions – where crystal structures transform without altering atomic composition – offer a unique window to directly probe the relationship between atomic arrangement and thermal properties. Intermetallic Zintl compounds provide an intriguing case study, as they exhibit both ionic and covalent bonding frameworks within a single crystal lattice. Within the AMX Zintl family (where A = alkali-metal or alkaline earth metal, M = transition metal, X = non-metal), a series of closely related crystal structures feature a covalent sublattice that transitions progressively from a two-dimensional (2D), graphene-like configuration to a fully interconnected three-dimensional (3D) network. By examining these crystallographic transitions, we uncovered concrete correlations between the dimensionality of covalent bonding and the resulting thermal properties.
We first explored the changes in thermal transport properties in the compound YbCuBi as its crystal structure transitions from a flat 2D covalent sublattice to a buckled quasi-2D covalent network with periodic interlayer interactions. Using a combination of resonance ultrasound spectroscopy, inelastic neutron scattering, and first-principles calculations, we studied the impacts of this crystallographic transition on acoustic and optical phonons. Thermal conductivity measurements elucidated how changes in phonon energies and elastic behavior impact the thermal transport characteristics. Building on this, we also investigated a quasi-2D to 3D covalent phase transition in the CaAgSb1-xBix solid solution. Isoelectronic substitution of Sb by Bi systematically alters the elastic properties, while the crystallographic transition induces a 'step-like' change. Lattice parameters from X-ray diffraction reveal the underlying mechanism of the phase transition, while resonance ultrasound spectroscopy elucidates its impact on elastic properties. We also explore the limitations of the Wiedemann-Franz approximation for heat transport by charge carriers, highlighting the boundaries of our current understanding of thermal transport by quasiparticles.
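For reference, the Wiedemann-Franz approximation mentioned above estimates the electronic part of the thermal conductivity from the electrical conductivity; in its standard degenerate-electron form (a textbook relation, not a result of this thesis),

$$\kappa_e = L\,\sigma T, \qquad L_0 = \frac{\pi^2}{3}\left(\frac{k_B}{e}\right)^2 \approx 2.44\times10^{-8}\ \mathrm{W\,\Omega\,K^{-2}},$$

so the lattice contribution is typically inferred as $\kappa_L = \kappa_{\mathrm{total}} - L\sigma T$; deviations of the true Lorenz number $L$ from the Sommerfeld value $L_0$ are one source of the limitations noted above.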
In the process, we discover promising thermoelectric properties in CaAgSb, attributed to its low thermal conductivity and high electronic mobility. According to the single parabolic band model, reducing carrier concentration could potentially enhance the thermoelectric performance of CaAgSb. Therefore, we conduct a ‘phase-boundary mapping’ study, combining first-principles density functional theory calculations and experiments to elucidate the behavior of carrier-generating defects under different growth conditions. We identify Ag vacancies as the defects with the lowest formation energy under all growth conditions, limiting Fermi level tuning to a narrow window, and suggesting that other routes may be needed to optimize the thermoelectric efficiency of CaAgSb.
Department: Chemical Engineering and Materials Science
Name: Demetrios A. Tzelepis
Date Time: Tuesday, November 19th, 2024 - 12:00 p.m.
Advisor: Dr. Lawrence Drzal
Lightweighting of automobiles and ground vehicles has driven the use of a wide range of materials, such as high-strength aluminum, advanced high-strength steel alloys, and ceramics, along with various composites, depending on the desired application. This inevitably leads to multi-material joints where fusion welding is not possible. Adhesive bonding offers an alternative to fusion welding for mixed or multi-material joints. In military ground vehicle applications, these multi-material joints undergo not only quasistatic and fatigue loading but also high strain rate events such as mine blast and ballistic penetration. Adhesives that exceed 10.0 MPa in shear strength and 3.81 mm in displacement at failure are classified as 'Group-1 adhesives'; they exhibit the excellent stiffness-toughness balance needed for high strain rate applications.
Polyurethanes (PU), polyureas (PUa), and their intermediates, poly(urethane-ureas) (PUU), represent an industrially important and versatile class of polymers used in coatings, sealants, and adhesive applications. In defense applications, PUas have been used in explosion (blast)-resistant coatings that can suppress the rupture of thick steel plates or the spallation of masonry structures by dissipating shock wave energy. Their versatility comes from their structure and morphology, which comprise hard segments and soft segments. Depending on hard segment content and soft segment length, PU and PUa can range from hard and brittle (high hard-segment content) to soft and elastomeric (low hard-segment content). In other words, they can be tailored to a balance of stiffness and toughness and may be a good choice for adhesive applications. In PU and PUa, the hard and soft segments can separate and form a percolated hard phase in a soft phase matrix. Adding nanoparticles such as graphene nanoplatelets (GnP) gives PU and PUa an additional microstructural dimension. The effect of GnP additions on the adhesive, quasistatic fracture, and viscoelastic properties of PUa is not fully understood; clarifying it is a necessary first step in tailoring PUa formulations for adhesive applications and understanding their high strain rate properties.
In this work, a multidisciplinary approach (experimental and modeling) is used to elucidate the effect of GnP on the processing (chemistry), structure (phase separation), and property (quasi-static and viscoelastic) relationships of PUa-based nanocomposites. A model polyurea with hard segment weight fractions (HSWF) of 20, 30, and 40 percent was developed to explore the combined effect of HSWF and nano-additions of 0.5, 1.0, and 1.5 weight percent GnP on the quasi-static and viscoelastic properties. For model PUa formulations with higher HSWF, the effect of GnP additions on quasi-static tensile and viscoelastic properties was negligible, but at lower HSWF, some improvement was seen in the viscoelastic properties along with simultaneous improvements in strength and ductility. Despite the complexity of the phase-separated microstructure of the PUa and the nanocomposite, time-temperature superposition (TTS) was shown to be valid for both the neat PUas and the PUa-GnP nanocomposites. Although the TTS shifts fit neither the Arrhenius nor the WLF model, they did fit a more recently developed two-state, two-(time)scale model. Furthermore, a micromechanical model utilizing fractional-calculus-based modeling showed excellent correlation between the experimentally obtained TTS curves and the mechanical modeling for both neat and composite PUa. The micromechanical model utilizes a few physical properties, such as modulus and relaxation time, to predict viscoelastic behavior, instead of the conventional Prony series, which has a large number of parameters with no relation to material properties. The micromechanical model parameters were evaluated at various nano-loadings and hard segment weight fractions, showing that the effect of GnP was significantly less pronounced than the effect of HSWF.
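For context, the two classical shift-factor models against which the TTS data were tested take the standard forms (textbook expressions, not results of this work, with $C_1$, $C_2$, and the activation energy $E_a$ as fit parameters):

$$\log a_T = -\frac{C_1\,(T - T_{\mathrm{ref}})}{C_2 + T - T_{\mathrm{ref}}} \;\;\text{(WLF)}, \qquad \ln a_T = \frac{E_a}{R}\left(\frac{1}{T} - \frac{1}{T_{\mathrm{ref}}}\right) \;\;\text{(Arrhenius)}.$$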
In addition, single lap joints were used for an initial exploration of multiple formulations from both a chemistry perspective (changing isocyanate and diamine type) and a microstructural perspective (hard segment weight fraction). Results indicate that Group-1 adhesives with cohesive failure can be achieved with PUa, showcasing its potential as an adhesive.
Overall, this work supports the feasibility of utilizing PUas in adhesive applications. The detailed characterization of PUas with varying HSWF and GnP content shows that HSWF had a far greater effect on the properties of PUa than GnP additions. GnP did not have adverse or detrimental effects on PUa performance. Future work can explore the advantages of GnP in adding multifunctionality to PUa, such as enhanced thermal and electrical conductivity. At the same time, GnP showed significant improvement in PUas with low HSWF, creating a wide range of potential applications for PUa-based bonded joints. Future work should also explore the high strain rate behavior of PUa bonded joints.
Department: Chemical Engineering and Materials Science
Name: Shalin Patil
Date Time: Tuesday, November 12th, 2024 - 1:00 p.m.
Advisor: Dr. Shiwang Cheng
Hydrogen bonding (H-bonding) is omnipresent, occurring in DNA, RNA, proteins, and water. The hallmark features of hydrogen-bonding interactions are their directionality and reversibility: H-bonds have bond angles between 135° and 180°, and they are relatively weak, breaking and recombining on experimental time scales. Despite wide acknowledgment of the directionality and reversibility of H-bonding interactions, their influence on molecular dynamics and macroscopic properties, such as flow or viscosity, remains far from understood. In this dissertation, we focus on one of the simplest types of H-bonding liquids, monohydroxy alcohols (MAs), to show how H-bonding interactions affect supramolecular structure formation, supramolecular dynamics (including the Debye relaxation process), and the relationship between these supramolecular structures and viscosity. In particular, we employed a new experimental testing platform, rheo-dielectric spectroscopy, which reveals: (i) an interesting relationship between the structural relaxation time, t_α, and the Debye time, t_D, with t_D^2/t_α following an Arrhenius temperature dependence; (ii) the presence of an intermediate relaxation process, with characteristic time t_m between t_α and t_D, that is both dielectrically and rheologically active; (iii) excellent agreement between t_m and the hydrogen-bond exchange time of MAs from NMR measurements. These observations inspired a new theoretical development, the living polymer model (LPM), which enables a coherent explanation of how a wide range of molecular parameters affect the supramolecular structures and dynamics of monohydroxy alcohols, including the roles of molecular architecture, alcohol type, and dilution. The results help clarify several concepts in the current understanding of the dynamics of H-bonding liquids, such as the supramolecular chain breakup time, the average supramolecular chain size, and the H-bonding lifetime of MAs.
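In symbols, finding (i) states that

$$\frac{t_D^2}{t_\alpha} = A \exp\!\left(\frac{E_a}{RT}\right), \qquad t_\alpha < t_m < t_D,$$

where the prefactor $A$ and activation energy $E_a$ are material-specific fit parameters (this restates the abstract's relation; the parameter names are ours).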
Department: Chemical Engineering and Materials Science
Name: Sabrina J. Curley
Date Time: Tuesday, November 5th, 2024 - 11:30 a.m.
Advisor: Dr. Caroline R. Szczepanski
Surfaces are how bulk materials interact with the world and, in nature, how organisms interact with their environment. As such, multiple approaches in water-security research take inspiration from the unique water-surface interactions observed in plants and animals that have adapted to scarcity. Traditional strategies for surface formation (e.g., photolithography, block copolymer assembly, additive manufacturing, and machining) have certain limitations, including the need for multiple processing steps or specialized equipment, restrictions on patterned length scales, and requirements for niche chemical precursors. These limitations carry costs in time, energy, and resources, and often generate excess waste. Compared to these traditional methods, photopolymerization-induced phase separation (PIPS) offers many advantages: it can be employed under ambient conditions with commercially available chemicals, forming features at multiple length scales in a single UV cure step via reaction-driven topography, with no photomasks and minimal waste generation. Here, the Namib Desert beetle is taken as a guide for designing surfaces with PIPS capable of capturing water from humid environments. The chemical and physical patterning that arises from PIPS makes it an ideal approach for designing complex, hierarchically structured surfaces reminiscent of the beetle carapace. To achieve this biomimetic design, surface wrinkling and phase separation behavior during PIPS are studied in conjunction with one another, combining mechanisms often studied in isolation.
Two families of resins were studied for biomimetic coatings via PIPS: (1) an acrylonitrile and 1,6-hexanediol diacrylate comonomer system with poly(methyl methacrylate) additives, and (2) a vinyl acetate and 1,6-hexanediol diacrylate comonomer system with poly(dimethyl siloxane) additives. The inert polymer additives were initially dissolved in the comonomer solutions where, upon photopolymerization, decreased miscibility between these inert additives and the developing polymer network triggered phase separation. Examining the effects of comonomer/polymer selection, crosslink density, UV intensity, and curing environment provides a robust exploration space for investigating the interplay of phase separation, network vitrification, and interfacial energies in the system. Control over the reaction thermodynamics and kinetics through these experimental variables resulted in heterogeneous polymer morphologies with unique chemical and physical surface patterning. Coatings from the two PIPS resins exhibited surface texturing at both the microscale and the macroscale on a single surface. Specifically, the inert polymer additive enables macroscale wrinkles to form via depth-independent internal stresses across phase domains, while microscale roughness simultaneously arises from depth-wise mechanical gradients due to oxygen radical quenching. Chemical patterning is achieved via macroscale phase separation. Domain formation and coalescence are induced by tailoring the interfacial energy interactions of the system, forming macroscale regions with differing wettabilities. Introducing materials with contrasting surface energies to form resin-material interfaces during photopolymerization can spatially direct the chemical domains as the system reorients to minimize its surface energy. Using the acrylonitrile and 1,6-hexanediol diacrylate comonomer system with poly(methyl methacrylate) additives, sample faces were produced with stark contrasts in water contact angle, with a difference of over 50 degrees observed between the hydrophilic and hydrophobic faces.
To better understand PIPS systems, a systematic approach using Hansen Solubility Parameters (HSP) enabled rapid screening of potential resin formulations. The evolving miscibility interactions between the resin components during photopolymerization (reacting monomer to inert polymer, reacting polymer to inert polymer, and reacting monomer to reacting polymer) were evaluated. Experimental data from the acrylonitrile system were used to benchmark predictions when using this approach to select the comonomer and inert polymer system. The screening afforded by HSP analysis allowed for the design of a vinyl acetate and 1,6-hexanediol diacrylate comonomer system with poly(dimethyl siloxane) additives, minimizing safety hazards while maintaining comparable versatility in chemical and physical patterning. These resins were used to form large-scale (100 cm²) coatings to test water capture performance. Here, hydrophilic domains were formed through resin-water interfaces introduced at the start of photopolymerization, resulting in circular smooth domains amid roughened hydrophobic domains, a patterning similar to that of the Namib Desert beetle. Hydrophobic PIPS surfaces with wrinkles collected higher volumes of water than plain glass controls, and surfaces with chemically and physically heterogeneous domains collected the most water. This work showcases the versatility of single-step coating design through PIPS to produce complex chemically and physically patterned surfaces using materials that possess minimal hazards while remaining commercially and economically viable.
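The HSP screening described above relies on the standard Hansen distance between two components (a textbook relation, with $\delta_d$, $\delta_p$, $\delta_h$ the dispersion, polar, and hydrogen-bonding parameters and $R_0$ the interaction radius):

$$R_a^2 = 4(\delta_{d,1}-\delta_{d,2})^2 + (\delta_{p,1}-\delta_{p,2})^2 + (\delta_{h,1}-\delta_{h,2})^2, \qquad \mathrm{RED} = \frac{R_a}{R_0},$$

where RED < 1 suggests miscibility and RED > 1 suggests likely phase separation; tracking how these pairwise distances evolve as monomer converts to polymer is the essence of the screening approach.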
Department: Chemical Engineering and Materials Science
Name: Tanzilur Rahman
Date Time: Tuesday, October 8th, 2024 - 1:00 p.m.
Advisor: Dr. Carl Boehlert
The main hypothesis of this research work is that thermomechanically-processed regions of wrought metals and alloys that undergo similar equivalent plastic strains exhibit similar microstructures and associated characteristics, such as mechanical and physical properties (i.e., properties that are related to the dislocation structures). That is, materials that undergo the same equivalent plastic strain should exhibit the same dislocation structures and dislocation densities, and this should then translate to similar microstructures and associated mechanical properties. It was decided that less than 10% difference would be considered acceptable for verifying this hypothesis. In this work, high-pressure torsion (HPT) was considered as the plastic deformation processing technique to produce the wrought samples. To some extent, the proposed hypothesis has been evaluated indirectly (only for hardness distribution) by the HPT research community. However, the proposed work is novel because no one has directly evaluated this hypothesis using the combined microstructure and hardness methodology proposed in this work.
The equivalent strains chosen as the foci of this work, which for HPT processing are a function of the number of turns, the radial distance from the disk center, and the disk height, were 24, 82, 123, 247, and 371. The following microstructural characterization techniques were used to evaluate this hypothesis: optical microscopy (OM), scanning electron microscopy (SEM), X-ray diffraction (XRD), atom probe tomography (APT), and transmission electron microscopy (TEM). Vickers and Berkovich microhardness testing were chosen as the mechanical property characterization techniques.
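Although the abstract does not state the expression, the equivalent strain in HPT is conventionally estimated from the ideal-torsion relation used throughout the HPT literature (with $N$ the number of turns, $r$ the radial distance from the disk center, and $h$ the disk height):

$$\varepsilon_{\mathrm{eq}} = \frac{\gamma}{\sqrt{3}} = \frac{2\pi N r}{\sqrt{3}\,h},$$

which is why a given equivalent strain maps onto specific combinations of turn count and radial position.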
The model material was Zn-3Mg (wt.%), which readily undergoes plastic deformation at room temperature (RT) without a tendency for cracking at plastic strain levels below 30%. This material was chosen for several reasons. In equilibrium, the alloy exhibits a two-phase microstructure. It is straightforward to prepare metallographically for OM and SEM imaging as well as Vickers indentation, and its grain size typically ranges between 1 and 100 microns. It is also amenable to HPT deformation at typical pressures (6 GPa) and can withstand a relatively large number of turns (i.e., >30) without cracking.
In addition to evaluating this hypothesis, another objective of this dissertation was to compare the microstructure and hardness of powder-processed Zn-3Mg (wt.%) HPT disks with similar disks processed from Zn-3Mg (wt.%) cast alloys as well as hybrids of the same composition. In particular, the different hardness distributions of these multiple-phase materials and their steady-state behavior are discussed.
The overall results could neither verify nor refute the hypothesis, and the reasons for this are described in detail. However, evaluating the hypothesis furthered understanding of the microstructural evolution that takes place during HPT processing. In addition, the results of the microstructure and mechanical property evaluation will facilitate the development of the next generation of Zn-Mg implants with improved biodegradability and mechanical properties. Overall, this work has enhanced our understanding of the effect of HPT processing on the microstructure and resulting mechanical properties of Zn-3Mg (wt.%), and this understanding can be transferred to other alloys and alloy systems.
Department: Chemical Engineering and Materials Science
Name: Affan Malik
Date Time: Thursday, September 19th, 2024 - 2:00 p.m.
Advisor: Dr. Hui-Chia Yu
Energy storage technologies are key to a future of less reliance on fossil fuels and cleaner energy. Rechargeable batteries, particularly lithium-ion batteries, have become a mainstay in energy storage, notably in electric vehicles and mobile applications. However, optimizing their performance to achieve faster charging, increased capacity, and higher utilization remains a challenge. Accomplishing these goals requires a microscopic-level understanding of battery electrodes, which is hindered by their complex morphologies. Computer simulations can bridge this gap by providing insights into microstructural phenomena. A framework combining the smoothed boundary method (SBM) and adaptive mesh refinement (AMR) is introduced to model and study electrode microstructures. This framework is implemented with finite difference methods (FDM) and parametrized with material properties from the literature. We demonstrate the framework's usage and effectiveness with half-cell simulations of a LixNi1/3Mn1/3Co1/3O2 (NMC-333) cathode through one-dimensional and three-dimensional simulations on synthetically generated microstructures. A crucial goal of our work is studying lithium plating on electrodes, which is a major obstacle to realizing an electrode's true theoretical capacity and fast charging. Graphite, the predominant anode material in lithium-ion batteries, is particularly prone to lithium plating, especially under fast charging conditions. Thus, modeling graphite is critical to grasping the dynamics of lithium-ion batteries and lithium plating. The graphite anode undergoes phase transformations during lithiation. Incorporating the Cahn-Hilliard phase-field equation into the framework allows for detailed and more accurate simulations of these phase transformations in graphite anodes. Using the developed framework for graphite, we identified overcharging conditions, the influence of particle size, and the importance of pore tortuosity on real reconstructed electrodes. The framework can facilitate the design of thick electrodes, promising higher capacity without experimental construction.
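As a flavor of the phase-field component of the framework, the following is a minimal one-dimensional Cahn-Hilliard integration with explicit finite differences. It is a toy illustration of the equation class mentioned above, not the thesis's SBM/AMR implementation, and all parameter values are arbitrary.

```python
# Toy 1D Cahn-Hilliard solver: dc/dt = M * d2(mu)/dx2 with mu = f'(c) - kappa * d2c/dx2,
# where f(c) = c^2 (1 - c)^2 is a double-well free energy. Illustrative only; this is
# not the SBM/AMR framework described in the abstract.
import numpy as np

nx, dx, dt = 128, 1.0, 0.01     # grid size, spacing, explicit time step
M, kappa = 1.0, 1.0             # mobility and gradient-energy coefficient (arbitrary)
rng = np.random.default_rng(0)
c = 0.5 + 0.05 * rng.standard_normal(nx)   # lithiation fraction with small noise

def lap(u):
    """Second difference with periodic boundaries."""
    return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

for _ in range(20000):
    mu = 2.0 * c * (1.0 - c) * (1.0 - 2.0 * c) - kappa * lap(c)  # chemical potential
    c += dt * M * lap(mu)                                        # explicit Euler step

# The field separates into high- and low-concentration domains (spinodal decomposition).
print(f"c range after evolution: [{c.min():.3f}, {c.max():.3f}]")
```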
Furthermore, the framework allowed us to examine two different approaches to delaying lithium plating in graphite: a thermodynamic approach, hybrid anodes in which graphite is mixed with hard carbon, and a kinetic approach, tunnels in which synthetic channels are introduced into the electrode. Through our simulations, we find that hard carbon particles act as a buffer for lithiation in hybrid anodes, delaying the surface saturation of graphite particles and thus delaying lithium plating on graphite. Creating tunnels, on the other hand, generates easier paths for ion diffusion and therefore leads to better utilization of the electrode. Such channels in thick electrodes can yield high-capacity, efficient electrodes. Finally, the development of this framework culminates in a demonstration of full-cell simulations. In summary, the presented framework streamlines the simulation of electrochemical processes in complex electrode microstructures and offers a fast and robust tool for designing and studying them.
Department: Chemical Engineering and Materials Science
Name: Mehrsa Mardikoraem
Date Time: Friday, August 2nd, 2024 - 2:00 p.m.
Advisor: Dr. Daniel Woldring
Proteins are vital in medicine, nanotechnology, and industry. Protein engineering designs these molecules for specific functions like catalyzing reactions or drug delivery. However, designing proteins with desired properties is challenging due to unpredictable mutation effects and complex fitness landscapes. Traditional methods like directed evolution and rational design have limitations in exploring vast sequence spaces and modeling amino acid interactions. Advances in machine learning (ML) and the increasing availability of biological data have shifted protein engineering from theory-driven to data-driven approaches. Despite progress, challenges remain in capturing nuanced protein behaviors, enhancing data quality and diversity, and developing models for complex protein-ligand interactions.
This dissertation integrates ML and computational tools with biological insights for innovative protein engineering. It focuses on designing proteins with desired properties, enhancing their numerical representations, and modeling protein-drug interactions. An ensemble approach combining traditional encodings with protein sequence language models achieved a 94% F1 score in sequence-function predictions. The study also developed a novel pipeline combining AlphaFold, molecular docking, and a heterogeneous graph neural network model (HIPO) for predicting the inhibition of drug transport proteins, which are crucial for drug metabolism and biodistribution and are implicated in adverse side effects from drug-drug interactions. Advancing beyond protein representation and drug interaction modeling, this work generates new-to-nature proteins with desired properties using generative models and ancestral sequence reconstruction.
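As a minimal illustration of the ensemble idea (synthetic sequences and a toy label; the thesis combines traditional encodings with protein language-model embeddings rather than the single one-hot view shown here):

```python
# Toy sequence-function ensemble: one-hot features, soft voting over two classifiers.
# Synthetic data and label; illustrative of the ensemble idea only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

AA = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list(AA), 30)) for _ in range(300)]      # synthetic sequences
y = np.array([int(s.count("K") + s.count("R") > 4) for s in seqs])  # toy "function" label

def one_hot(sequences):
    idx = {a: i for i, a in enumerate(AA)}
    X = np.zeros((len(sequences), 30 * len(AA)))
    for n, s in enumerate(sequences):
        for p, a in enumerate(s):
            X[n, p * len(AA) + idx[a]] = 1.0
    return X

X = one_hot(seqs)   # a real pipeline would concatenate language-model embeddings here
clf = VotingClassifier(
    [("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
     ("lr", LogisticRegression(max_iter=1000))],
    voting="soft",
)
print(f"F1 (5-fold CV): {cross_val_score(clf, X, y, cv=5, scoring='f1').mean():.2f}")
```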
By integrating biological insights with advanced ML techniques, this research enhances the capabilities of protein engineering for improved therapeutics and diagnostics.
Department: Chemical Engineering and Materials Science
Name: Christopher Herrera
Date Time: Monday, April 23rd, 2024 - 10:30 a.m.
Advisor: Dr. Richard Lunt
Interest in photovoltaics (PV) is steadily increasing with the development of building-integrated photovoltaics (BIPV). To accelerate BIPV integration, transparent PVs (TPVs) have emerged to enable deployment over vision glass, where visible transparency and power conversion efficiency (PCE) are equally important. Transparent luminescent solar concentrators (TLSCs) offer a promising approach to achieving high visible transparency due to a simpler module structure in the incident light path. By selectively harvesting ultraviolet (UV) and near-infrared (NIR) wavelengths, TPVs and TLSCs have a theoretical PCE limit of 20.6% for human vision. To date, TLSCs have reported only moderate PCE values, often with poor or unreported operational lifetimes. This thesis focuses on the modification of various luminophore classes (organic molecules, organic salts, and metal halide nanocluster salts) to provide routes to improve the performance and lifetime of TLSCs and to demonstrate future applications in the agriculture sector.
Organic cyanine salts are popular luminophore candidates in TLSCs due to highly tunable, selective absorption bands and high demonstrated photoluminescent quantum yield (PLQY) in the visible region. However, they commonly suffer from poor photostability and low PLQY in the NIR region. Here, we demonstrate the surprising impact of anion exchange in dramatically enhancing the lifetime of cyanine salts in a dilute environment without significantly altering the bandgap or PLQY. This enhancement results in an extrapolated lifetime increase from tens of hours to over 65,000 hours under illumination. Using a combination of experiment and DFT computation, we show that lower absolute cation-anion binding energies generally lead to greater photostability. We then used this model to predict the stability of other anions.
Next, a class of donor-acceptor-donor (DAD) molecules is investigated to begin understanding the relationship between chemical structure and PLQY. Within this DAD class, we demonstrate a strong correlation between solvent environment and DAD PLQY, resulting in dramatic enhancements in PLQY, with values close to 1.0. We fabricate LSCs using these DADs and report the highest single-component device performance to date.
Metal halide nanoclusters, which are precisely defined in their chemical structure, have recently been shown by our group to be a promising UV-absorbing luminophore. By changing the transition metal from Mo (group 6) to Ta or Nb (group 5), the bandgap and absorption bands shift dramatically, with distinct transitions present in the NIR, making these clusters of even greater interest for TPVs and TLSCs. We explore the photophysical properties of these new compounds, contrast them with the Mo-based clusters, and discuss pathways for TPV and TLSC integration.
Finally, we demonstrate the first plant-transparent PVs highly suitable for agricultural applications. This will initiate a new field of "transparent agrivoltaics," where the tradeoff between plant yield and power production can effectively be eliminated. We first studied the effects of varying light intensity and wavelength-selective cutoffs on commercially important crops (basil, petunia, and tomato). Despite the differences in TPV harvester absorption spectra, photon transmission of photosynthetically active radiation (PAR; 400-700 nm) is the most dominant predictor of crop yield and quality, indicating that the blue, green, and red wavebands are all essentially equally important to these plants. When the average photosynthetic daily light integral exceeds ~12 mol·m⁻²·d⁻¹, basil and petunia yield and quality are acceptable for commercial production. However, even modest decreases in TPV transmission of PAR reduce tomato growth and fruit yield. The results identify the necessity of maximizing PAR transmission to create the most broadly applicable TPV agrivoltaic panels for diverse crops and geographic locations. We determine that deploying 10% PCE, plant-optimized TPVs over approximately 10% of total agricultural and pastureland in the U.S. would generate 7 TW, nearly double the entire energy demand of the U.S.
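As a rough, hedged sanity check of that headline figure, using round numbers assumed for illustration (roughly 3.6 million km² of combined U.S. agricultural and pastureland and a mean solar irradiance near 200 W m⁻²), not values taken from the thesis:

$$P \approx 0.10 \times (3.6\times10^{12}\ \mathrm{m^2}) \times 200\ \mathrm{W\,m^{-2}} \times 0.10 \approx 7.2\times10^{12}\ \mathrm{W} \approx 7\ \mathrm{TW},$$

where the first factor is the land fraction covered and the last is the assumed PCE.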
Department: Chemical Engineering and Materials Science
Name: Chase Bruggerman
Date Time: Wednesday, April 10th, 2024 - 9:00 a.m.
Advisor: David Hickey
About 15% of enzymes rely on the cofactor nicotinamide adenine dinucleotide (phosphate) (NAD(P)+). The cofactor has a redox-active nicotinamide site, which can undergo a reversible two-electron-one-proton reduction to form NAD(P)H. The ability to control reactions involving NAD(P)H is a potential market opportunity, enabling the transformation of biological feedstocks with high safety (near room temperature) and selectivity (both regio- and stereoselectivity). However, the cost of NAD(P)+ – tens to hundreds of thousands of dollars per mole – is prohibitively high. An appealing way to lower the cost barrier is to regenerate a catalytic amount of NAD(P)H from electrochemical reduction of NAD(P)+; however, the reduction is often intercepted after the first electron transfer to give an enzymatically-inactive dimer. The ability to design systems for regenerable NADH is hindered by a lack of understanding of which structural features correlate with dimerization, and which features correlate with reduction to NAD(P)H. Cofactor mimetics (mNAD+), which retain the redox active nicotinamide site but have variable molecular structures, have been explored as a platform for understanding the structure-function relationships governing the redox behavior of these cofactors.
The purpose of the present thesis is to explore the electrochemistry of mNAD+, to understand which structural features correlate with dimerization, and how systems can be designed to favor reduction to mNADH over mNAD dimer. First, an overview will be presented of the chemistry and electrochemistry of NAD+ and mNAD+, with a special emphasis on methods of quantifying dimerization rates. The next part of the presentation explores the effect of both the molecular structure and the counterion of mNAD+ on the dimerization rate, using alternating current voltammetry. It is shown that dimerization is faster at lower reduction potentials and, counterintuitively, when sterics at the 1-position are larger; the data suggest the reduction of mNAD+X- ion pairs rather than lone mNAD+ ions. The second half of the talk will explore conditions that favor the reduction of mNAD+ to mNADH, and it is shown that sodium pyruvate favors the reduction of mNAD+ to a product that is electrochemically indistinguishable from mNADH. Evidence is provided in support of an interaction between an mNAD radical and a pyruvate radical, with mNAD increasing the rate of electron transfer to pyruvate. Finally, the impact of pyruvate on product distribution of mNAD+ is explored with bulk electrolysis experiments.
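Schematically, the competition at issue is between productive reduction and radical dimerization after the first electron transfer, a standard EC₂-type picture (our notation; $k_{dim}$ is the dimerization rate constant quantified by the AC voltammetry described above):

$$\mathrm{mNAD^+} + e^- \rightarrow \mathrm{mNAD^{\bullet}}, \qquad \mathrm{mNAD^{\bullet}} + e^- + \mathrm{H^+} \rightarrow \mathrm{mNADH}, \qquad 2\,\mathrm{mNAD^{\bullet}} \xrightarrow{k_{dim}} (\mathrm{mNAD})_2,$$

with the dimer forming at a rate $k_{dim}[\mathrm{mNAD^{\bullet}}]^2$.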
Department: Chemical Engineering and Materials Science
Name: Lincoln Mtemeri
Date Time: Thursday, April 4th, 2024 - 1:00 p.m.
Advisor: Dr. David P. Hickey
Cell-free bioelectrocatalysis has drawn significant research attention as the world transitions toward sustainable bioenergy sources. This technology utilizes electrodes to drive challenging enzymatic redox reactions, such as CO2 reduction and the selective oxidation of lignin biomass. At these bioelectrochemical interfaces, enzymes are rarely capable of direct electron exchange with the electrode surface because many redox enzymes harbor cofactors buried within the protein matrix, which acts as an electrical insulator. In such cases, electrochemically active small molecules called redox mediators have proven effective in enabling efficient electron transfer by acting as electron shuttles between the electrode and the enzyme cofactor. However, selecting suitable redox mediators remains challenging due to the lack of comprehensive design criteria. Presently, their design relies on a trial-and-error approach that emphasizes redox potential as the only parameter while overlooking the significance of other structural features. It is crucial to acknowledge that while the redox potential of the mediator serves as a thermodynamic descriptor, it falls short of fully describing the kinetic behavior of redox mediators. In this seminar, I present our efforts in developing strategies for designing and understanding the behavior of redox species, using quinone-mediated glucose oxidation by glucose oxidase as a model system.
This seminar will begin by describing the application of parameterized modeling, specifically supervised machine learning, to identify which structural components of quinone redox mediators correlate with enhanced reactivity with a model enzyme, glucose oxidase (GOx). Through this analysis, we identified redox potential and mediator area (or molecular size) as crucial chemical parameters to optimize when designing mediators. We further explored the role of the steric parameter (i.e., redox mediator projected area) in accessing GOx via its active site tunnel. Using two complementary computational techniques, steered molecular dynamics and umbrella sampling, a rate-limiting step was identified from a series of elementary steps. Specifically, we determined that transport of the redox species through the protein tunnel constitutes the rate-limiting step in the overall process.
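A minimal sketch of the kind of parameterized modeling described above, with hypothetical descriptor names and synthetic data standing in for the measured quinone descriptors and rate constants used in the study:

```python
# Toy supervised model relating mediator descriptors to reactivity, then ranking
# feature importances. Descriptor names and data are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([
    rng.uniform(-0.4, 0.2, n),   # redox potential (V vs. ref), hypothetical range
    rng.uniform(30, 120, n),     # projected molecular area (A^2), hypothetical range
    rng.uniform(0, 5, n),        # H-bond donor count, hypothetical descriptor
])
# Synthetic rate constant: favors moderate potentials and small areas, plus noise.
k = np.exp(-5.0 * (X[:, 0] + 0.1) ** 2) * np.exp(-X[:, 1] / 60.0) \
    + 0.05 * rng.standard_normal(n)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, k)
for name, imp in zip(["E_redox", "proj_area", "hbd_count"], model.feature_importances_):
    print(f"{name}: {imp:.2f}")  # potential and area dominate, mirroring the study
```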
Utilizing molecular docking and molecular dynamics simulations, we examined a specific quinone-functionalized polymer to determine why it exhibits activity with glucose dehydrogenase (FAD-GDH) but not with GOx, despite both structurally similar enzymes being active toward the corresponding freely diffusing mediator. Docking simulations coupled with MD refinement reveal that the active site of GOx is inaccessible to the polymer-bound redox mediator due to the added steric bulk; this is in contrast to FAD-GDH, which has a wider molecular tunnel to its active site.
Although these strategies for redox mediator design and engineering were developed using GOx as a model system, a similar approach holds promise for designing systems involving other redox mediators. This work demonstrates that employing parameterized modeling in mediator design has the potential to be applied to other bioelectrocatalytic platforms. Moreover, the computational simulations can effectively address fundamental questions where continuum models are inadequate. This integrated effort brings us closer to the design of next-generation, effective bioelectrodes for mediated bioelectrocatalysis.
Department: Civil and Environmental Engineering
Name: Francis Hanna
Date Time: Tuesday, November 5th, 2024 - 3:00 p.m.
Advisor: Dr. Annick Anctil
As the clean energy transition unfolds, the use of renewable energy and electric vehicles (EVs) has increased rapidly over the past decade and is expected to grow further. Solar and battery demands are expected to reach 29 PWh and 13 PWh by 2050, respectively. The clean energy transition is vital to meeting climate goals but faces challenges such as future battery waste generation and the availability and environmental footprint of energy materials.
Cadmium telluride (CdTe) is one of the world's leading thin-film photovoltaic (PV) technologies. CdTe PV relies on tellurium, a scarce metal mainly recovered as a by-product from copper electrorefining anode slimes. Several studies have investigated the availability of tellurium and used life cycle assessment (LCA) to evaluate its environmental impact. However, previous availability studies are static and do not reflect the interconnection of tellurium supply, demand, and price, and previous LCA studies do not reflect industrial best practices for tellurium recovery. This study develops a system dynamics model to assess tellurium availability between 2023 and 2050 under different demand scenarios. All demand scenarios exhibit a tellurium supply gap. The results show that recycling retired solar panels and improving tellurium yield from copper electrorefining are efficient mitigation approaches. An LCA is also conducted to evaluate the environmental impact of tellurium recovery from copper electrorefining for different production methods and locations. The environmental impact of tellurium varies by production location and method. Tellurium recovery in the USA via pyro-hydrometallurgical treatment of anode slimes reduces the freshwater toxicity and resource depletion of CdTe semiconductors by 44% and 42%, respectively, compared to the worst-case scenario. The results show that previous studies underestimate the environmental impact of tellurium and, as a result, underestimate the freshwater toxicity and abiotic depletion potential of CdTe solar panels by 35% and 50%, respectively.
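A compact illustration of the stock-and-flow logic such a model captures, with all numbers and the adjustment rule invented for illustration (the dissertation's model is calibrated to real supply, demand, and price data and includes their interconnection):

```python
# Toy system-dynamics loop: supply capacity chases exogenous demand with a delay,
# so exponentially growing demand leaves a persistent gap. All values illustrative.
supply = 600.0   # t/yr, assumed initial tellurium supply
tau = 8.0        # years of capacity-expansion delay, assumed
for year in range(2023, 2051):
    demand = 600.0 * 1.06 ** (year - 2023)   # assumed 6%/yr demand-growth scenario
    gap = demand - supply                     # unmet demand this year
    supply += gap / tau                       # first-order capacity adjustment
    print(year, round(demand), round(supply), round(gap))
```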
The environmental impact of batteries depends on the source of virgin materials and on the recycled material content and recovery method. Recycling helps manage future battery waste while providing a domestic supply source, but the environmental impact of recycled materials remains unclear; a comprehensive assessment of conventional and new recycling methods is needed. The environmental impact of batteries also depends on the production location, the energy source, and the final battery chemistry. In this dissertation, a configurable LCA tool is developed to assess the environmental impact of batteries for different supply chain scenarios. This tool is first used to evaluate and compare three lithium-ion battery (LIB) recycling methods: 1) conventional hydrometallurgy (CHR), 2) truncated hydrometallurgy (THR), and 3) pyrometallurgy (PR). The same tool is used to evaluate the effect of recycled content on new batteries. Finally, multiple scenarios are evaluated to assess the environmental effect of reshoring the battery supply chain to the US. The results show that THR reduces the carbon footprint, water consumption, freshwater toxicity, and resource depletion potential of new batteries by 87%, 72%, 50%, and 36%, respectively, compared to CHR and PR. The effect of recycled materials on the environmental impact of new batteries varies by impact category and depends on the recycling method and the source of primary materials being replaced. In a best-case scenario, 100% recycled content can reduce LIB cells' carbon footprint and freshwater toxicity by 50% and 61%, respectively. However, water consumption and scarcity footprint improve only when high-impact virgin materials are replaced with recycled materials recovered via pyrometallurgy. Further analysis shows that offshoring the battery supply chain leads to the highest battery cell environmental footprint. Alternatively, batteries produced in Canada have the lowest impact, driven mainly by a cleaner electricity grid and source of primary materials. The environmental impact of 100% US-made batteries largely depends on the source of primary materials, specifically lithium and nickel. Increasing the renewable energy contribution to 1.75 kWh per kWh of cell produced can alleviate the high environmental impact of domestic nickel and lithium and reduce the environmental footprint of 100% US-made batteries.
Department: Civil and Environmental Engineering
Name: Hamad Bin Muslim
Date Time: Tuesday, October 29th, 2024 - 1:00 p.m.
Advisor: Dr. Syed Waqar Haider
Hot-mix asphalt (HMA) compaction at longitudinal joints is critical for pavement performance and longevity. Many highway agencies face challenges maintaining deteriorated joints, often resulting in issues like raveling along the centerline. Despite extensive research and training on proper HMA placement and compaction, joint deterioration remains a leading cause of premature flexible pavement failure. Improving joint compaction during construction is therefore critical to better pavement performance. Longitudinal joint construction encompasses various methods, differing in laying conditions, joint geometry, rolling patterns, and techniques. While each has advantages, these methods also carry risks in consistently achieving optimal compaction. Current quality assurance (QA) methods, such as coring and density gauges, are labor-intensive, time-consuming, costly, and offer limited coverage, increasing the likelihood of missing low-density areas. The variability in construction methods and the limitations of traditional QA testing raise the risk of inadequate joint compaction, potentially compromising the pavement's durability and performance.
The Dielectric Profiling System (DPS) offers a nondestructive alternative for assessing compaction quality, providing continuous real-time coverage by measuring dielectric values, which correlate with HMA density but require a calibrated relationship. Adopting DPS for QA testing requires alternative methods (other than air voids) to quickly assess joint density during construction. This study compared various longitudinal joint construction methods using dielectric measurements from Minnesota and Michigan road projects. The continuous dielectric data were discretized into subsections and analyzed using relative dielectric differences, which indicated over 2% more air voids at the joint than at the mat.
This study used a coreless calibration method with lab-prepared pucks to develop a new model for converting dielectric values to predicted air voids for similar analyses. Project- and group-wise calibrations were performed; project-specific models aligned well with cores collected during DPS and QA testing. Minor HMA production fluctuations across different days displayed minimal impact on air void predictions. Additionally, HMA mixtures were grouped for group-wise calibrations using recorded dielectric values and mix characteristics, which demonstrated reasonable accuracy. This approach highlights the potential for direct DPS data use in the field without needing project-specific models.
Statistical analyses revealed that unconfined joints had the highest air void content, with 50 to 100% of subsections showing significant differences, indicating over 2% more air voids than the adjacent mat. Additionally, 60 to 100% of unconfined joint subsections fell below the 60% Percent Within Limits (PWL), the rejectable quality level (RQL). In contrast, all other joint types showed similar compaction to the mat, with negligible subsections below 60% PWL. These findings were consistent when using predicted air voids. Similarly, the probabilistic analysis showed a 30 to 60% likelihood that unconfined joints had significantly lower dielectric values than the mat, while other joints exhibited minimal differences or better compaction.
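For readers unfamiliar with the PWL quality measure used here, the sketch below shows the usual normal-approximation calculation for one subsection (production specifications typically use the beta distribution for small samples; the spec limit and measurements are hypothetical):

    # PWL sketch (Python) using the normal approximation; values are assumed.
    from statistics import NormalDist, mean, stdev

    air_voids = [8.1, 9.4, 7.6, 10.2, 8.8]   # % air voids in one subsection
    usl = 9.0                                 # assumed upper spec limit on voids
    q = (usl - mean(air_voids)) / stdev(air_voids)   # quality index
    pwl = 100 * NormalDist().cdf(q)
    print(f"PWL = {pwl:.1f}% -> {'accept' if pwl >= 60 else 'below RQL'}")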
This study introduces a Longitudinal Joint Quality Index (LJQI) that enables the direct use of dielectric values to enhance the field applicability of DPS. A threshold of 70% LJQI was established for joint quality acceptance. LJQI comparisons revealed that unconfined joints had higher void content than the adjacent mat in 11 to 89% of stations across multiple projects. All analyses consistently indicated that constructing butt or tapered joints, and avoiding unconfined joints, achieves better joint density. Moreover, smaller subsections were more effective at identifying local compaction problems; for practical reasons, 100 ft subsections are suggested for analyses.
Many State Highway Agencies (SHAs) rely on specifications that focus on as-constructed air voids to assess construction quality and determine pay factors (PF) for contractor payments, often neglecting the performance of longitudinal joints. This study proposes a Performance-Related Specification (PRS) framework that leverages the DPS's continuous data to link joint service life to void content, used as the Acceptance Quality Characteristic (AQC). By using air void content as the AQC and PWL as the quality measure, SHAs can more accurately assess joint quality and make informed pay adjustments, ensuring durable, high-quality pavements while minimizing overpayments.
Department: Civil and Environmental Engineering
Name: Preet Lal
Date Time: Tuesday, September 10th, 2024 - 12:00 p.m.
Advisor: Narendra Das
Soil moisture is a critical component of the Earth's water cycle, essential for various environmental and agricultural processes, and its significance is further underscored by the impacts of climate change. Changes in soil moisture patterns can have profound implications for hydrological dynamics, agricultural productivity, and ecosystem sustainability. To understand these changes, an initial study examined the long-term spatiotemporal evolution of soil moisture and its interactions with key hydrometeorological parameters using coarse-resolution data. Over a 40-year period, approximately 50% of the global vegetated surface layer (0-7 [cm] depth) experienced significant drying, whereas only 9% of the global vegetated area showed an upward trend in soil moisture, largely attributed to increasing precipitation. While these results provide valuable insights into broad-scale soil moisture trends and their primary drivers, they also highlight the limitations of coarse-resolution data, which fail to capture the finer-scale processes and anthropogenic influences that are critical for understanding micro-scale feedback mechanisms.
The retrieval of high-resolution soil moisture products at a global scale has become achievable in this “Golden Age of SAR”. Among the upcoming L-band SAR missions, NISAR is in the final stages of preparation for launch. Taking advantage of the upcoming NISAR mission, a “multi-scale” soil moisture retrieval algorithm is proposed. The algorithm is based on a disaggregation approach that combines coarse-resolution (9 [km]) soil moisture data with fine-scale co-polarization and cross-polarization backscatter measurements to retrieve high-resolution soil moisture. The coarse-resolution soil moisture input can come from either satellite radiometer data or climate model data. In this study, European Centre for Medium-Range Weather Forecasts (ECMWF) ERA5-Land reanalysis data were used as the coarse-resolution soil moisture input; ECMWF assimilates a large amount of satellite and in-situ information to produce very reliable datasets. A major advantage of sourcing the input from a climate model is reduced dependency on satellite mission lifetimes. The end goal of the algorithm is to remove dependencies on complex modeling, tedious retrieval steps, and multiple ancillary data needs, and thereby decrease the degrees of freedom to achieve optimal accuracy in soil moisture retrievals. The proposed algorithm targets a spatial resolution of 200 [m], determined based on user requirements. Because NISAR data are not yet available, similar L-band data from UAVSAR acquired during the SMAPVEX-12 campaign and from ALOS-2 SAR were used for algorithm calibration and validation. The algorithm was initially tested on selected agricultural sites; the retrieved high-resolution soil moisture was validated with in-situ measurements, and the ubRMSE was below 0.06 [m³/m³], meeting the NISAR mission accuracy goals. Additionally, given SAR's ability to provide fine-resolution backscatter measurements at 10 [m] spatial resolution, the analysis was conducted at spatial resolutions of 100 [m] and 200 [m] across various hydrometeorological settings globally, including sites from polar to arid regions with diverse land use. This retrieval and validation were performed using ALOS-2 L-band SAR time-series data. The retrieved soil moisture at both spatial resolutions showed consistent patterns, with the finer 100 [m] resolution providing more detailed information. The validation statistics show that the algorithm consistently maintained an ubRMSE below 0.06 [m³/m³] at both 100 [m] and 200 [m] spatial resolutions. The performance of the algorithm, even in forested regions with dense canopies, demonstrates its robustness, attributable to the higher penetration capability of the L-band SAR frequency.
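The disaggregation step can be pictured as redistributing the coarse value within each 9 [km] cell according to how each fine pixel's backscatter deviates from the cell mean; the sketch below shows this schematically, with an assumed sensitivity parameter rather than the algorithm's calibrated one, together with the ubRMSE metric used for validation:

    # Schematic disaggregation and ubRMSE (Python); beta is an assumed placeholder.
    import numpy as np

    theta_coarse = 0.25                              # m3/m3, one 9 km cell
    sigma0 = np.random.default_rng(0).uniform(-14, -8, (45, 45))  # dB, fine pixels
    beta = 0.02                                      # m3/m3 per dB (assumed)
    theta_fine = theta_coarse + beta * (sigma0 - sigma0.mean())
    print(theta_fine.mean())                         # preserves the coarse mean

    def ubrmse(est, obs):
        # Unbiased RMSE against in-situ data: remove mean bias before the RMSE.
        err = est - obs
        return np.sqrt(np.mean(err**2) - np.mean(err)**2)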
However, since these validation statistics are based on a limited number of sites, the retrieval error must be estimated for each grid cell to ensure comprehensive accuracy. Recognizing the limitations of in-situ measurements, which are sparse and geographically constrained, an analytical approach is also proposed to estimate uncertainty in high-resolution soil moisture retrievals for the NISAR mission. This approach accounts for errors in the input datasets and algorithm parameters. The approach was applied to the UAVSAR datasets from the SMAPVEX-12 campaign and compared with the ubRMSE for different crop types. The uncertainty estimates closely match the ubRMSE, demonstrating the robustness of the analytical approach. Overall, this study demonstrates the effectiveness of the proposed algorithm for high-resolution soil moisture retrieval for the NISAR mission and future SAR missions, with the potential to achieve spatial resolutions finer than 100 [m].
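A minimal version of such an analytical error budget, assuming the linear retrieval form sketched above and independent error sources with made-up standard deviations, is first-order variance propagation:

    # First-order uncertainty propagation (Python); all sigmas are assumed.
    import numpy as np

    beta = 0.02              # m3/m3 per dB, retrieval sensitivity (assumed)
    sd_theta_coarse = 0.04   # m3/m3, error in the coarse soil moisture input
    sd_sigma0 = 0.7          # dB, radiometric uncertainty of backscatter
    sd_beta = 0.005          # uncertainty of the sensitivity parameter
    dsigma = 3.0             # dB, typical backscatter deviation from cell mean

    var = sd_theta_coarse**2 + (beta * sd_sigma0)**2 + (dsigma * sd_beta)**2
    print(f"retrieval uncertainty ~ {np.sqrt(var):.3f} m3/m3")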
Department: Civil and Environmental Engineering
Name: Zheng Li
Date Time: Wednesday, September 4th, 2024 - 1:00 p.m.
Advisor: Dr. Alison Cupples
Microorganisms play important roles in complex and dynamic environments such as agricultural soils and contaminated site sediments. Molecular methods have greatly advanced the understanding of microbial processes, such as nitrogen cycling, carbon cycling and contaminant biodegradation, by providing insights into the structure, function and dynamics of microbial communities.
The first project evaluated the impact of four agricultural management practices (no tillage, conventional tillage, reduced input, biologically based) on the abundance and diversity of microbial communities regulating nitrogen cycling using shotgun sequencing. Relative abundance values, diversity and richness indices, taxonomic classification, and genes associated with nitrogen metabolism were examined. The microbial communities involved in nitrogen metabolism are sensitive to varying soil conditions, which, in turn, likely has important implications for N2O emissions. This work was conducted virtually during the COVID pandemic.
The second project examined the impact of plant diversity, soil pore size, and incubation time on soil microbial communities in response to new carbon inputs (glucose). Soil cores from three plant systems (no plants, monoculture switchgrass, and high-diversity prairie) were incubated with labeled and unlabeled glucose. The phylotypes responsible for carbon uptake from glucose were identified using stable isotope probing (SIP). The microbial communities were influenced by plant diversity but not by pore size or incubation time. The differentiated carbon assimilators may be linked to different carbon assimilation strategies (r- vs. K-strategists) depending on pore size.
The third and fourth projects focused on the biodegradation of the common groundwater contaminant 1,4-dioxane. 1,4-Dioxane was commonly used as a stabilizer in 1,1,1-trichloroethane formulations and is now frequently detected at sites where chlorinated solvents are present. A major challenge in addressing 1,4-dioxane contamination stems from its chemical characteristics, which result in migration and persistence. Given the limitations of traditional remediation methods, interest has turned to bioremediation to address 1,4-dioxane contamination.
The third project examined the impact of yeast extract and basal salts medium (BSM) on 1,4-dioxane biodegradation rates and on the microorganisms involved in carbon uptake from 1,4-dioxane. For this, laboratory microcosms and abiotic controls were inoculated with three soils and amended with media (water, or BSM and yeast extract) and 2 mg/L 1,4-dioxane. SIP was then used to identify the active phylotypes involved in 1,4-dioxane biodegradation. The amendment of BSM and yeast extract enhanced 1,4-dioxane degradation in all three soil types. Gemmatimonas, unclassified Solirubacteraceae, and Solirubrobacter were associated with carbon uptake from 1,4-dioxane and may represent novel degraders. Solirubrobacter and Pseudonocardia were associated with propane monooxygenase genes, which potentially function in 1,4-dioxane biodegradation.
The fourth project further explored the impact of yeast extract on 1,4-dioxane degradation at low concentrations (< 500 mg/L) using sediment from three impacted sites and four agricultural soils. 1,4-Dioxane biodegradation trends differed between inocula sources and treatments. For two of the impacted sites, no 1,4-dioxane biodegradation was observed under any treatment, indicating a lack of 1,4-dioxane degraders. In contrast, 1,4-dioxane degradation occurred in all treatments in microcosms inoculated with the agricultural soils or the other impacted site's sediments. Bioaugmentation with agricultural soils initiated 1,4-dioxane biodegradation in the sediments with no intrinsic degradation capacity. Overall, yeast extract enhanced 1,4-dioxane biodegradation in specific sediments, and bioaugmenting site sediments with agricultural soils may represent a promising approach for the remediation of 1,4-dioxane contaminated sites.
Department: Civil and Environmental Engineering
Name: Xuyang Li
Date Time: Friday, August 23rd, 2024 - 12:00 p.m.
Advisor: Nizar Lajnef
The convergence of artificial intelligence (AI) with engineering and scientific disciplines has catalyzed transformative advancements in both structural health monitoring (SHM) and the modeling of complex physical systems. This dissertation explores the development and application of AI-driven methodologies with a focus on anomaly detection and inverse modeling for domain-specific and other scientific problems.
SHM is vital for the safety and longevity of structures like buildings and bridges. With the growing scale and potential impact of structural failures, there is a dire need for scalable, cost-effective, and passive SHM techniques tailored to each structure without relying on complex baseline models. We introduce Mechanics-Informed Damage Assessment of Structures (MIDAS), which continuously adapts a bespoke baseline model by learning from the structure's undamaged state. Numerical simulations and experiments show that incorporating mechanical characteristics into the autoencoder improves minor damage detection and localization by up to 35% compared to standard autoencoders.
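The detection logic, train a reconstruction model on the healthy state only and then flag responses it can no longer reconstruct, can be sketched with a linear stand-in (PCA) for the autoencoder; the sensor data here are synthetic and the mechanics-informed terms of MIDAS are not reproduced:

    # Reconstruction-error anomaly detection (Python); PCA stands in for the
    # autoencoder, and the sensor data are synthetic.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(500, 5))                # hidden healthy dynamics
    healthy = latent @ rng.normal(size=(5, 20)) + 0.05 * rng.normal(size=(500, 20))

    baseline = PCA(n_components=5).fit(healthy)       # baseline of undamaged state
    def recon_error(x):
        # Distance between each response and its reconstruction by the baseline.
        return np.linalg.norm(x - baseline.inverse_transform(baseline.transform(x)), axis=1)

    threshold = np.percentile(recon_error(healthy), 99)
    damaged = healthy[:10] + np.eye(20)[0]            # perturb one sensor channel
    print(recon_error(damaged) > threshold)           # anomaly flags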
In addition to anomaly detection, we introduce NeuralSI for structural identification, estimating key nonlinear parameters in mechanical components such as beams and plates by augmenting partial differential equations (PDEs) with neural networks. Because it requires only limited measurement data, NeuralSI is well suited to SHM applications where the exact state of a structure is often unknown. The model can extrapolate to both standard and extreme conditions using the identified structural parameters. Compared to data-driven neural networks and other physics-informed neural networks (PINNs), NeuralSI reduces interpolation and extrapolation errors in displacement distribution by two orders of magnitude.
Building on this approach, we expand our focus to broader systems modeled by parameterized PDEs, which are prevalent in various physical, industrial, and social phenomena. These systems often have unknown or unpredictable parameters that traditional methods struggle to estimate due to real-world complexities such as multiphysics interactions and limited data. We introduce NeuroPIPE, which estimates unknown field parameters from sparse observations by modeling them as functions of space or state variables using neural networks. Applied to several physical and biomedical problems, NeuroPIPE achieves a 100-fold reduction in parameter estimation errors and a 10-fold reduction in peak dynamic response errors, greatly enhancing the accuracy and efficiency of complex physics modeling.
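The underlying inverse problem, recovering an unknown parameter field of a PDE from sparse response measurements, can be illustrated on a 1D toy problem; here a low-dimensional basis stands in for the neural-network parameterization used by NeuralSI and NeuroPIPE, and the ground truth and sensor locations are synthetic:

    # Toy PDE parameter identification (Python): recover k(x) in -(k u')' = 1
    # from sparse observations of u. The exp-quadratic basis is a stand-in for
    # a neural network; all data are synthetic.
    import numpy as np
    from scipy.optimize import least_squares

    n = 101
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]

    def solve(k):
        # Finite-difference solve of -(k u')' = 1 with u(0) = u(1) = 0.
        kmid = 0.5 * (k[:-1] + k[1:])
        A = np.zeros((n, n)); b = np.full(n, h**2)
        A[0, 0] = A[-1, -1] = 1.0; b[0] = b[-1] = 0.0
        for i in range(1, n - 1):
            A[i, i - 1], A[i, i], A[i, i + 1] = -kmid[i - 1], kmid[i - 1] + kmid[i], -kmid[i]
        return np.linalg.solve(A, b)

    k_field = lambda c: np.exp(c[0] + c[1] * x + c[2] * x**2)  # positive field
    k_true = 1.0 + 0.8 * np.sin(np.pi * x)                     # synthetic truth
    obs = np.arange(5, n, 10)                                  # sparse sensors
    u_obs = solve(k_true)[obs]

    fit = least_squares(lambda c: solve(k_field(c))[obs] - u_obs, x0=np.zeros(3))
    print(np.max(np.abs(k_field(fit.x) - k_true)))             # recovery error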
Bio: Xuyang Li is a dual Ph.D. candidate in Civil Engineering and Computer Science at Michigan State University, where he is co-advised by Prof. Nizar Lajnef and Prof. Vishnu Boddeti. Li’s research interests lie in leveraging domain knowledge to advance machine learning, particularly in physics-informed machine learning for dynamic system modeling. He has worked on machine learning-based spatial-temporal modeling, anomaly detection, and parameter estimation in various dynamic systems, along with finite element modeling.
Department: Civil and Environmental Engineering
Name: Liang Zhao
Date Time: Tuesday, August 20th, 2024 - 2:00 p.m.
Advisor: Dr. Irene Xagoraraki
In recent decades we have witnessed numerous outbreaks worldwide, resulting in millions of infections and deaths; examples include the 1918 H1N1 virus, the 1968 H3N2 virus, the 2003 SARS coronavirus, the 2012 MERS-CoV, and the 2019 SARS-CoV-2. Factors including rapid population growth, the escalating climate change crisis, recurring natural disasters, booming immigration and globalization, and concomitant sanitation and wastewater management challenges are anticipated to increase the frequency of disease outbreaks in the years to come. Traditional disease detection systems rely primarily on diagnostic analysis of specimens collected from infected individuals in clinical settings. This approach has significant limitations in predicting and providing early warnings of impending disease outbreaks: infected individuals are often tested only after symptoms develop, and health authorities are usually notified only after a disease surge has begun. Consequently, health authorities respond reactively instead of taking proactive measures during a pandemic. Additionally, clinical data collected by traditional disease surveillance systems often fail to accurately reflect actual infections in communities because of dominant asymptomatic infections, the inability of clinical testing to capture all infections, limited testing supplies and accessibility, and patients' testing behaviors. Environmental surveillance, especially wastewater surveillance or wastewater-based epidemiology, allows analysis of environmental community composite samples. Municipal wastewater samples are composite biological samples of an entire community that represent a snapshot of the disease burden of the population covered by the corresponding sewershed. Collecting and analyzing untreated wastewater samples from centralized wastewater treatment plants and neighborhood manholes for specific viral and bacterial targets at a regular cadence can reveal trends in pathogen concentrations in wastewater. These trends represent the viral and bacterial loads shed by infected individuals, whether symptomatic or asymptomatic. Based on measured wastewater concentrations of disease pathogens and other available datasets, such as clinical and demographic data, researchers can build models to predict disease incidence before clinical reporting and develop tools that provide early warnings of upcoming disease surges. This crucial information can help public health officials make informed decisions regarding preparedness measures and the allocation of resources. The primary objective of this dissertation is to develop comprehensive laboratory, technological, and translational methodologies for forecasting viral and bacterial outbreaks through wastewater-based epidemiology.
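A minimal version of the lead-time idea behind such models, with synthetic series standing in for real wastewater and case data, estimates how far the wastewater signal leads clinical cases by scanning cross-correlations over lags:

    # Toy lead-time estimation (Python); both series are synthetic.
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(120.0)
    wastewater = np.exp(-0.5 * ((t - 50) / 10)**2) + 0.05 * rng.normal(size=t.size)
    cases = np.exp(-0.5 * ((t - 57) / 10)**2) + 0.05 * rng.normal(size=t.size)

    lags = range(15)
    corr = [np.corrcoef(wastewater[:t.size - l], cases[l:])[0, 1] for l in lags]
    print("estimated lead time:", int(np.argmax(corr)), "days")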
Bio: Liang Zhao is a fourth-year PhD candidate in environmental engineering at Michigan State University. In his doctoral studies at MSU, Liang has used molecular microbiology laboratory techniques, mathematical tools, and statistical and visualization methods to develop pre-emergence systems that enable health departments and practitioners to use environmental surveillance to generate early warnings and predict infections of existing and emerging human communicable diseases, including COVID-19, norovirus, RSV, and the sexually transmitted infections chlamydia and syphilis. He has worked closely on wastewater surveillance projects with the Michigan Department of Health and Human Services, the Great Lakes Water Authority, and local health departments in the City of Detroit, as well as Wayne, Macomb, and Oakland counties.
Department: Civil and Environmental Engineering
Name: Mohammad Wasif Naqvi
Date Time: Wednesday, July 17th, 2024 - 9:00 a.m.
Advisor: Dr. Bora Cetin
Freeze-thaw action in soils, a process in which soil moisture repeatedly freezes and thaws, causes significant heave and settlement, leading to substantial damage to pavements and infrastructure, particularly in seasonally freezing regions. This increases maintenance costs, reduces structural integrity, and shortens the lifespans of roadways and other critical infrastructure. In 2013 alone, U.S. state highway agencies reported spending approximately $27 billion on pavement maintenance, and freeze-thaw damage is considered one of the factors responsible for these expenses. Addressing this issue is essential for infrastructure durability and performance in affected areas, decreasing economic costs and improving safety. This dissertation explores an innovative mitigation approach known as engineered water repellency to reduce the impacts of freeze-thaw cycles on soils. The study also investigates the effect of soil salt concentrations caused by road deicing operations on freeze-thaw action. An extensive literature review provides a comprehensive understanding of the mechanisms of frost action, its impacts on infrastructure, and existing mitigation strategies.
The research employs both experimental and large-scale testing methodologies to evaluate the efficacy of organosilane (OS) treatments in reducing frost heave and moisture migration in frost-susceptible soils by imparting water repellency to the soil. A novel large-scale soil test box simulates realistic environmental conditions, providing valuable insights into freeze-thaw action in soil and the practical application of OS treatments. Results demonstrate that OS treatments significantly mitigate frost heave and improve soil stability by reducing moisture migration. Specifically, OS-treated soils showed reductions in maximum soil heave of up to 96% and in water migration of up to 97% compared to untreated soils. The large-scale test box, which provided controlled yet realistic top-down freezing conditions, revealed that treated soils maintained higher minimum temperatures and lower moisture content above the hydrophobic layer, thereby reducing the heave monitored at 0.15 m depth. However, the results also highlighted the importance of integrating proper drainage systems to prevent excessive moisture accumulation and ensure the effectiveness of water-repellency treatments in real-world applications.
The present study also investigates the effects of varying sodium chloride (NaCl) concentrations on freeze-thaw behavior, revealing that higher salt levels effectively lower the freezing point, reduce heave rates, and decrease water intake. The study emphasizes the importance of simulating realistic temperature gradients to understand the effect of salt concentration on freeze-thaw behavior in soils. For instance, soils with 5% NaCl concentration showed significant freezing point depression and reduced heave rates to 11.3 mm/day (ASTM) and 1.5 mm/day (low-temperature gradient) from 22.5 mm/day and 17.2 mm/day, respectively, in the control. Additionally, salt treatments effectively decreased moisture content and water migration, with the highest salt concentration demonstrating the most substantial reductions. However, salt migrates toward the freezing front, increasing soil salt concentrations in the upper layers.
An economic analysis using life cycle cost analysis (LCCA) confirmed that engineered water repellency is a cost-effective long-term solution compared to traditional methods. While initial costs might be higher, the lower equivalent uniform annual costs (EUAC) and net present values (NPV) of OS treatments make them economically viable over the long term. These findings collectively advance the understanding of soil behavior under freeze-thaw conditions and propose practical, economically viable strategies for improving infrastructure resilience in cold climates. Future research should focus on field validations and long-term monitoring to refine these strategies and ensure their effectiveness across diverse environmental conditions.
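The LCCA arithmetic behind that comparison is standard: discount each alternative's cost stream to a net present value, then annualize it; the costs and rates below are hypothetical placeholders, not the study's figures:

    # NPV and EUAC sketch (Python); all costs and rates are assumed.
    def npv(costs, rate):
        # Discount a cost stream; index 0 is the initial (year-zero) cost.
        return sum(c / (1 + rate)**t for t, c in enumerate(costs))

    def euac(v, rate, years):
        # Capital recovery factor converts an NPV to a uniform annual cost.
        return v * rate * (1 + rate)**years / ((1 + rate)**years - 1)

    rate, years = 0.04, 30
    untreated = [100_000] + [12_000] * years    # assumed maintenance-heavy stream
    os_treated = [160_000] + [4_000] * years    # assumed higher initial cost

    for name, costs in [("untreated", untreated), ("OS-treated", os_treated)]:
        v = npv(costs, rate)
        print(f"{name}: NPV = {v:,.0f}, EUAC = {euac(v, rate, years):,.0f}")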
Department: Civil and Environmental Engineering
Name: Huy Dang
Date Time: Monday, July 15th, 2024 - 2:00 p.m.
Advisor: Dr. Yadu Pokhrel
Dams are among the most important man-made structures, providing significant benefits to society by mitigating floods and droughts while supporting irrigation, domestic and industrial water supply, and power generation. However, global attention to the detrimental ramifications of dam operations has increased owing to the observed irreversible environmental impacts of existing dams in over-developed regions. Despite these concerns, growing demands for energy and water in developing regions have led to a boom in the construction of large dams in recent years, with hundreds more planned for the near future. Additionally, the construction and operation of dams in these regions are often based on localized, incomplete, or inconsistent observation-based hydrologic analyses, rendering them less effective in mitigating hazard risks. Simultaneously, climate change is intensifying flood and drought events, making them less predictable and more destructive, especially in developing regions. Thus, there is an urgent need for in-depth investigation of past changes, as well as future uncertainties, in the hydrology of these regions under the compound impact of climate change and dam operations.
This dissertation addresses these critical issues by employing a high-resolution river-floodplain-reservoir model, CaMa-Flood-Dam (CMFD), that realistically accounts for hydropower and irrigation dam operations. Model simulations are used to quantify changes in river regime and flood dynamics in the Mekong River Basin (MRB). First, analyses of the Tonle Sap, an important subbasin with unique hydrological features, provide a comprehensive assessment of the alteration of the Tonle Sap Lake, Southeast Asia's largest lake. Then, key insights are presented on the evolving river regime and flood pulse of the entire MRB over 83 years, focusing on the differences between climate and dam impacts on seasonal timing and water balance. Finally, potential changes in river regime and extremes across the MRB under multiple combinations of future climate and planned dam development are explored. The key findings from these analyses are: (1) trends and variability in Mekong river flow are still driven mainly by climate variation; however, dam operations have exerted a growing influence on the Mekong flood pulse, especially after 2010; (2) dams are causing a gradual shrinkage of the Tonle Sap Lake by reducing its annual inflow from the Mekong mainstream; (3) dams are delaying the Mekong's wet season onset and shortening its duration; (4) dams have largely altered Lower Mekong flood occurrence by shifting substantial volumes of water between seasons; and (5) in the future, dams will notably increase dry season flow.
The results of this dissertation provide major advances and important insights into the integrated river-floodplain-reservoir dynamics of the MRB, paving pathways toward more sustainable development based on an understanding of the region's continually changing hydrological systems. Furthermore, this assessment could benefit future investigations in other developing regions worldwide where dam construction is similarly booming.
Department: Civil and Environmental Engineering
Name: Celso Santos
Date Time: Wednesday, July 10th, 2024 - 12:00 p.m.
Advisor: Dr. Bora Cetin
The long-term performance of pavement depends on the complex geomechanical properties of the unbound materials used in the construction of the pavement foundation. When the pavement is subjected to cyclic stresses, the stress is transmitted downward through the aggregates that compose the different layers (i.e., base, subbase, and subgrade). Material properties such as gradation, density, plasticity index, moisture sensitivity, aggregate shape, stiffness (resilient modulus (MR)), and drainage capacity are crucial qualities that contribute to drainage and stress dissipation and protect the pavement from distresses such as cracking and rutting. For instance, a subgrade layer composed of expansive clay undergoes significant volume changes in response to variations in moisture content; consequently, it exerts powerful pressures on the pavement structure, leading to uplift during wet periods and settlement during dry periods.
The base and subbase layers protect the subgrade from excessive traffic loads while facilitating pavement drainage. Ideally, natural aggregate is used in the construction of pavement foundations. However, due to the high cost, environmental impact, and scarcity of natural aggregates, recycled concrete aggregate (RCA) has been used as an alternative. The crushed nature of RCA offers superior mechanical benefits, such as high stiffness, compared to natural aggregate (GM). However, the presence of unhydrated cement and cement mortar in RCA can affect the long-term performance of pavement and its drainage properties, potentially causing significant distress. While RCA is a stiffer and more sustainable option, its properties are not fully understood. Additionally, there is still a lack of consensus on the effect of geomaterial index properties on the geomechanical properties of both RCA and natural aggregates used in the construction of pavement foundation layers.
To address these issues, several base (RCA and GM) and subgrade unbound materials with different index properties were collected from various roadway sections under construction in Michigan. An extensive evaluation was conducted to understand how their index properties affect: 1) the stress-strain response of subgrade (i.e., sand and clay) and base (i.e., RCA and GM) unbound materials; 2) the hydraulic properties (i.e., hydraulic conductivity and the water content-matric suction relationship); and 3) the time required to drain 50% of a saturated base layer. The stress-strain response of sandy and fine unbound subgrade soils was evaluated using the NCHRP and shakedown concepts. Depending on their gradation and plasticity index, the materials showed stress-hardening, stress-hardening followed by stress-softening, or stress-softening. Further analysis was conducted to understand the confining pressure and stress dependency of these materials. To study the effect of index properties on RCA and GM, principal component analysis (PCA) was employed for dimensionality reduction and to identify patterns within the dataset. Based on the PCA results, six materials were selected, and a model was developed to estimate laboratory resilient modulus results from falling weight deflectometer (FWD) field tests. Additionally, the hydraulic and time-to-drain properties of the base materials were evaluated to further understand the impact of material properties on base layer performance and their unsaturated behavior. The findings led to several recommendations for materials used in designing sustainable, long-life pavement. Detailed discussions of the results are provided in the following chapters.
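As a pointer to how the PCA screening step works in practice, the sketch below standardizes a materials-by-properties matrix and inspects how much variance the leading components capture; the matrix is random, standing in for the collected index properties:

    # PCA screening sketch (Python); the property matrix is synthetic.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    X = np.random.default_rng(3).normal(size=(24, 8))  # 24 materials x 8 properties
    pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
    print(pca.explained_variance_ratio_)   # variance captured by PC1 and PC2
    print(pca.components_)                 # loadings: which properties drive each PC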
Department: Civil and Environmental Engineering
Name: Augusto Masiero Gil
Date Time: Tuesday, July 2nd, 2024 - 1:00 p.m.
Advisor: N/A
Fire represents a significant hazard to bridges, often resulting in damage to or collapse of structural members. Typically, bridge fires result from crashes or overturns of vehicles carrying large amounts of flammable materials near bridges. These fires have become a growing concern over the last decade due to increasing urbanization and transportation of hazardous materials. Characterized by the rapid onset of very high temperatures (above 1000°C), these fires significantly affect the stability and integrity of structural members. Despite these risks, current bridge codes and standards do not specify any fire safety features in the design and construction of bridges, leaving critical transportation infrastructure vulnerable to fire hazards.
While there has been some research in recent years on the fire response of steel and composite bridges, no studies have addressed the fire problem in concrete bridges. Further, prestressed concrete girders, designed with slender cross-sections to reduce self-weight and span longer distances, can experience faster degradation during fire exposure due to rapid temperature propagation within the girder cross-section. Although conventional concrete members have good fire response properties, newer concrete types such as High-Strength Concrete (HSC) and Ultra-High Performance Concrete (UHPC) experience faster degradation of mechanical properties at elevated temperatures and are more susceptible to fire-induced spalling.
To address some of the identified knowledge gaps, experimental and numerical studies on the fire response of concrete bridge girders have been carried out. As part of the experimental work, pore pressure measurements in concrete at elevated temperatures were conducted to evaluate the mechanisms that lead to fire-induced spalling in concrete. Also, shear strength tests were carried out to assess the degradation of shear strength with temperature in UHPC. Complementing the experimental studies, a comprehensive finite element-based numerical model was developed to trace the response of concrete bridge girders under fire conditions. The model accounts for varying fire scenarios, loading conditions, and temperature-dependent thermal and mechanical properties of steel and concrete, and was validated with data from fire tests. To develop typical bridge fire scenarios, fire dynamics simulations were carried out and incorporated into the model.
A set of parametric studies was undertaken to evaluate the effect of critical parameters on the fire response of concrete bridge girders. Results demonstrate that smaller concrete sections have lower fire resistance due to their lower thermal mass, and that I-shaped concrete girders are susceptible to shear failure from high temperatures in their webs. Other design parameters, such as span length and concrete strength, also significantly affect the fire performance of concrete bridges. In addition, fire simulations have shown that bridge fires are highly severe and are influenced by the bridge's geometric features. Based on these findings, recommendations to improve the fire design of bridge girders are proposed. For conventional concrete bridge girders, increasing cross-sectional size and limiting exposure of the web to high temperatures can improve fire performance. Internal pressure and spalling can be reduced in UHPC members through the addition of polypropylene fibers. Additionally, parameters for assessing the fire resistance of bridge girders, such as failure criteria and a bridge fire curve that accounts for the thermal gradient along the girder length, are proposed. The developed numerical tool is also applied to analyze the fire-induced collapse of the I-95 overpass in Philadelphia on June 11, 2023.
Keywords: Concrete bridges, Fire safety, Bridge girders, Ultra-high performance concrete
Department: Civil and Environmental Engineering
Name: Peng Chen
Date Time: Thursday, June 27th, 2024 - 10:00 a.m.
Advisor: Dr. Karim Chatti and Dr. Bora Cetin
Accurately predicting strain responses under axle loadings is crucial for the design of flexible pavements using the mechanistic-empirical approach, especially within the prevalent Pavement ME methodology. These strains are used directly in pavement damage calculations and distress predictions. The stiffness of the top layer of flexible pavement, asphalt concrete (AC), is influenced by both loading frequency and temperature due to its viscoelastic nature. Typically, the mechanistic behavior of AC is characterized by the dynamic modulus (E*) master curve, derived from laboratory tests under uniaxial sinusoidal loadings. While a full dynamic viscoelastic analysis can precisely predict critical strains, it is computationally demanding. Consequently, Pavement ME employs a layered linear-elastic analysis, relying on the concept of "equivalent loading frequency" to determine the elastic modulus of the AC layer under specific axle loadings. However, this method has limitations in accurately predicting critical strains within the AC layer.
This thesis introduces two novel frequency calculation methods: the "centroid of PSD" and the "equivalent frequency." The former computes frequency based on the weighted center of Power Spectral Density (PSD) of vertical stress pulses induced by axle loadings, while the latter iteratively adjusts frequency until it matches strains computed by dynamic viscoelastic analysis under moving loads. The accuracy of these methods, alongside the Pavement ME method, is evaluated against dynamic viscoelastic analysis results under moving loads.
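The "centroid of PSD" computation is straightforward to illustrate on a synthetic haversine stress pulse (the pulse duration below is an assumed placeholder; in practice it depends on vehicle speed and depth):

    # Centroid-of-PSD frequency (Python) for a synthetic haversine stress pulse.
    import numpy as np

    fs = 1000.0                                   # sampling rate, Hz
    t = np.arange(0.0, 1.0, 1.0 / fs)
    d = 0.03                                      # pulse duration, s (assumed)
    pulse = np.where(t < d, np.sin(np.pi * t / d)**2, 0.0)

    psd = np.abs(np.fft.rfft(pulse))**2           # power spectral density
    freq = np.fft.rfftfreq(pulse.size, 1.0 / fs)
    f_centroid = (freq * psd).sum() / psd.sum()   # power-weighted mean frequency
    print(f"representative loading frequency ~ {f_centroid:.1f} Hz")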
Findings reveal that while Pavement ME underestimates surface strains, it provides reasonable predictions with increasing depth for single and multiple axle configurations. Differences in loading frequencies between axle configurations are highlighted, and a correction method based on pulse width and equivalent frequency is proposed. Finally, both the original and corrected frequencies are implemented in MEAPA software to predict long-term pavement distress for real projects in Michigan. The results show that the difference between bottom-up fatigue cracking predicted by the original and corrected Pavement ME frequencies is negligible. The corrected frequency yields higher rutting predictions compared to the original Pavement ME method, ranging from approximately 15% to over 20% for AC rutting and 5% to 10% for total rutting, depending on pavement structures and traffic volumes.
Department: Civil and Environmental Engineering
Name: Brijen Miyani
Date Time: Sunday, April 22nd, 2024 - 1:00 p.m.
Advisor: Dr. Irene Xagoraraki
The recent COVID-19 pandemic has highlighted the importance of wastewater-based epidemiology (WBE) methods for effectively monitoring and predicting infectious viral disease outbreaks. Traditional disease detection systems rely on identification of infectious agents by diagnostic analysis of clinical samples, often after an outbreak has been established. These surveillance systems lack the ability to predict outbreaks, since it is impossible to test every individual in a community for all potential viral infections that may be emerging. Untreated wastewater can serve as a community-based sample that is tested to identify the diversity of endemic and emerging human viruses prevalent in the community. WBE can help reduce the load on medical systems, guide clinical testing, and provide early warnings. This dissertation presents innovative screening tools based on molecular methods, high-throughput sequencing, and bioinformatics analysis that can be applied to wastewater samples to identify viral diversity in the corresponding catchment community. Further, population biomarker methods were developed to normalize the signals. The first chapter of the dissertation applies a bioinformatics-based screening tool that revealed a high abundance of the rare human herpesvirus 8 in Detroit wastewater. The second chapter focuses on early warning of the second COVID-19 wave in Detroit, MI. The third chapter focuses on surveillance of SARS-CoV-2 in nine neighborhood sewersheds in the Detroit Tri-County area, United States, assessing per capita SARS-CoV-2 estimates and COVID-19 incidence. The fourth chapter uses molecular methods to identify a wide variety of human viruses in Trujillo, Peru wastewater, confirming COVID-19, monkeypox, and diarrheal disease outbreaks. The fifth chapter reveals signals of poliovirus types 1 and 3 detected in municipal wastewater in Trujillo, Peru and discusses the implications of positive results in communities.
Department: Civil and Environmental Engineering
Name: Hao Dong
Date Time: Thursday, April 4th, 2024 - 1:30 p.m.
Advisor: Dr. Kristen Cetin
In the United States, the residential and commercial sectors have consumed increasing amounts of energy over the past 70 years. As the U.S. shifts toward a carbon-neutral electric grid, electrification using fossil-fuel-free, renewable energy resources such as wind and solar will help reduce greenhouse gas (GHG) emissions. To reduce the need for fossil fuels and use energy more efficiently, technologies and policies are being introduced to decrease the demand-side intensity of the building sectors. Three issues are addressed in this research to support the goals of smart buildings and net-zero energy buildings (NZEB) in achieving human comfort and demand-side management (DSM): the sensitivity of sensing technology for smart building controls, occupants' patterns and correlations in residential buildings, and appliance use in residential buildings.
First, there has been a lack of studies and guidance on the appropriate placement of various sensors within a building and how sensor placement impacts building control performance. This research thus first investigates (i) how sensitive building controls are to sensor placement, in particular sensor location and orientation, analyzing the impact on energy use and demand for an integrated lighting and shading control system. Second, various studies have shown that occupancy-related factors in energy modeling can create significant differences in building energy consumption. Human-related factors, especially occupants' activities and behavior, are less well understood, particularly in the wake of the lifestyle changes that have occurred as a result of the pandemic. This research thus (ii) assesses and quantifies the changes to occupancy patterns that have occurred due to the COVID-19 pandemic and their relationship to socioeconomic factors. Finally, the third topic focuses on demand-side management (DSM), which enables control of the quantity and timing of electricity consumption. Approximately one-third of this consumption is from large appliances, many of which are occupancy-driven loads. Historically, energy use information for estimating the energy use of individual appliances has originated from a combination of field-collected and simulated data. However, this data originates from sources assessing pre-pandemic energy consumption patterns, so there is a need to (iii) assess how appliance energy use patterns have changed during and after the pandemic. This research thus helps estimate demand reduction opportunities from the use of appliances in DSM applications.
Department: Computational Mathematics, Science and Engineering
Name: Joey Bonitati
Date Time: Friday, August 16th, 2024 - 12:00 p.m.
Advisor: Dean Lee
This thesis investigates quantum algorithms for eigenstate preparation, with a primary focus on solving eigenvalue problems such as the Schrödinger equation using near-term quantum computing devices. These problems are ubiquitous across scientific fields, and accurate solutions are specifically needed as a prerequisite for many quantum simulation tasks. To address this, we develop three methods in detail: quantum adiabatic evolution with optimal control, the Rodeo Algorithm, and the Variational Rodeo Algorithm.
The first method explored is adiabatic evolution, a technique that prepares quantum states by simulating a quantum system that evolves slowly over time. The adiabatic theorem can be used to ensure that the system remains in an eigenstate throughout the process, but its implementation can often be infeasible on current quantum computing hardware. We employ a unique approach using optimal control to create custom gate operations for superconducting qubits and demonstrate the algorithm on a two-qubit IBM cloud quantum computing device.
We then explore an alternative to adiabatic evolution, the Rodeo Algorithm, which offers a different approach to eigenstate preparation by using a controlled quantum evolution that selectively filters out undesired components in the wave function stored on a quantum register. We show results suggesting that this method can be effective in preparing eigenstates, but its practicality is predicated on the preparation of an initial state that has significant overlap with the desired eigenstate. To address this, we introduce the novel Variational Rodeo Algorithm, which replaces the initialization step with dynamic optimization of quantum circuit parameters to increase the success probability of the Rodeo Algorithm. The added flexibility compensates for instances in which the original algorithm can be unsuccessful, allowing for better scalability.
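The filtering mechanism of the Rodeo Algorithm has a compact classical caricature: a component with energy E survives one cycle of Gaussian-random evolution time t with probability cos²((E − E_target)t/2), so off-target overlap decays over cycles; the toy spectrum and overlaps below are made up:

    # Classical caricature of Rodeo Algorithm filtering (Python); toy data.
    import numpy as np

    rng = np.random.default_rng(4)
    energies = np.array([-1.2, 0.3, 0.9, 2.1])   # toy spectrum
    weights = np.array([0.1, 0.4, 0.3, 0.2])     # initial overlaps |<E|psi>|^2
    e_target, sigma, cycles = -1.2, 3.0, 10

    for t in rng.normal(0.0, sigma, size=cycles):
        weights *= np.cos((energies - e_target) * t / 2.0)**2
        weights /= weights.sum()                 # renormalize after postselection

    print(weights)   # weight concentrates on the target eigenstate

The same sketch also exposes the caveat noted above: if the target's initial weight is near zero, the overall success probability collapses, which is the failure mode the Variational Rodeo Algorithm is designed to mitigate.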
This research seeks to contribute to a deeper understanding of how quantum algorithms can be employed to attain efficient and accurate solutions to eigenvalue problems. The overarching goal is to present ideas that can be used to improve understanding of nuclear physics by providing potential quantum and classical techniques that can aid in tasks such as the theoretical description of nuclear structures and the simulation of nuclear reactions.
Department: Computational Mathematics, Science, and Engineering
Name: Tianyu Yang
Date Time: Friday, April 12th, 2024 - 1:00 p.m.
Advisor: Yang Yang
Ultrasound modulated bioluminescence tomography (UMBLT) is a technique for imaging the 3D distribution of biological objects, such as tumors, using a bioluminescent source as a biomedical indicator. It combines bioluminescence tomography (BLT) with a series of perturbations caused by acoustic vibrations, and it outperforms BLT in terms of spatial resolution. The current UMBLT algorithm in the transport regime requires measurement at every boundary point in all directions, and reconstruction is computationally expensive. In this talk, we first introduce the UMBLT model in both the diffusive and transport regimes and formulate image reconstruction as an inverse source problem using internal data. Second, we present an improved UMBLT algorithm for isotropic sources in the transport regime. Third, we generalize an existing UMBLT algorithm in the diffusive regime to the partial data case and quantify the error caused by uncertainties in the prescribed optical coefficients.
Department: Computer Science and Engineering
Name: Nicholas Polanco
Date Time: Thursday, December 5th, 2024 - 11:00 a.m.
Advisor: Betty H.C. Cheng
The increase in inward-facing and outward-facing communication used by modern vehicles with automated features expands the breadth and depth of automotive cybersecurity vulnerabilities. Furthermore, because of the prominent role that human behavior plays in the lifetime of a vehicle, social and human-based factors must be considered in tandem with technical factors when addressing cybersecurity. A focus on informing and enabling stakeholders and their corresponding actions will promote vehicle security through a human-focused approach. The diverse stakeholders and their interactions with a modern-day vehicle span a spectrum of vulnerabilities that need to be addressed. Example stakeholders include the consumer using the vehicle, the technicians working on the car, and the engineers designing the software. Stakeholder-aware strategies can be applied in both a social and a technical manner to increase preventative security measures for autonomous vehicles. By leveraging theoretical foundations from the criminology domain, we create reusable social and technical stakeholder-based solutions, applicable to the vehicle and its supporting infrastructures, that can be used by the different stakeholders interacting with the vehicle. In this dissertation, we take an interdisciplinary approach to automotive cybersecurity, synergistically combining cybercrime theory, human factors, and technical solutions to develop reusable prevention and detection techniques.
Department: Computer Science and Engineering
Name: Vishal Asnani
Date Time: Tuesday, November 26th, 2024 - 8:30 a.m.
Advisor: Dr. Xiaoming Liu
Adversarial attacks in computer vision typically exploit vulnerabilities in deep learning models, generating deceptive inputs that can lead AI systems to incorrect decisions. However, proactive schemes, approaches designed to embed purposeful signals into visual data, can serve as “adversarial attacks for social good,” harnessing similar principles to enhance the robustness, security, and interpretability of AI systems. This research explores the application of proactive schemes in computer vision, diverging from conventional passive methods by embedding auxiliary signals known as "templates" into input data, fundamentally improving model performance, attribution capabilities, and detection accuracy across diverse tasks. This includes novel techniques for image manipulation detection and localization, which introduce learned templates to accurately identify and pinpoint alterations made by multiple, previously unseen Generative Models (GMs). The Manipulation Localization Proactive scheme (MaLP), for example, not only detects but also localizes specific pixel changes caused by manipulations, showing resilient performance across a broad range of GMs. Extending this approach, the Proactive Object Detection (PrObeD) scheme uses encoder-decoder architectures to embed task-specific templates within images, enhancing the efficacy of object detectors even under challenging conditions such as camouflaged environments.
This research further extends proactive schemes to generative models and video analysis, enabling attribution and action detection solutions. ProMark, for instance, introduces a novel attribution framework that embeds imperceptible watermarks within training data, allowing generated images to be traced back to specific training concepts, such as objects, motifs, or styles, while preserving image quality. Building on ProMark, CustomMark offers selective and efficient concept attribution, allowing artists to opt into watermarking specific styles and easily add new styles over time without retraining the entire model. Inspired by the proactive structure of PrObeD for 2D object detection, PiVoT introduces a video-based proactive wrapper that enhances action recognition and spatio-temporal action detection. By integrating action-specific templates through a template-enhanced Low-Rank Adaptation (LoRA) framework, PiVoT seamlessly augments various action detectors, preserving computational efficiency while significantly boosting detection performance. Lastly, the thesis presents a model parsing framework that estimates "fingerprints" for generative models, extracting unique characteristics from generated images to predict the architecture and loss functions of the underlying networks, a particularly valuable tool for deepfake detection and model attribution. Collectively, these proactive schemes offer significant advances over passive methods, establishing robust, accurate, and generalizable solutions for diverse computer vision challenges. By addressing key limitations of conventional passive approaches across vision applications, this research lays the groundwork for a future in which proactive frameworks improve AI-driven applications.
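In miniature, the proactive idea is: add a small secret template at protection time, then verify it later with a matched-filter score. The sketch below uses a random sign template and plain correlation, standing in for the learned templates and architectures described above:

    # Additive template embedding and detection (Python); illustrative only.
    import numpy as np

    rng = np.random.default_rng(5)
    image = rng.uniform(size=(128, 128))
    template = rng.choice([-1.0, 1.0], size=(128, 128))   # secret template

    protected = image + 0.02 * template    # +/-0.02 per pixel: visually negligible

    def score(img, tpl):
        # Correlate the zero-meaned image against the known template.
        return float(np.sum((img - img.mean()) * tpl))

    print(score(protected, template))   # large positive: template present
    print(score(image, template))       # near zero: template absent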
Department: Computer Science and Engineering
Name: Shivangi Yadav
Date Time: Friday, November 8th, 2024 - 10:30 a.m.
Advisor: Dr. Arun Ross
Synthetic biometric data, such as fingerprints, face, iris, and speech, can overcome some of the limitations associated with the use of real data in biometric systems. The focus of this work is the iris biometric. Current methods for generating synthetic irides and ocular images have limitations in terms of quality, realism, intra-class diversity, and uniqueness. Different methods are proposed in this thesis to overcome these issues while evaluating the utility of synthetic data for two biometric tasks: iris matching and presentation attack (PA) detection.
Two types of synthetic iris images are generated: (1) partially synthetic and (2) fully synthetic. The goal of “partial synthesis” is to introduce controlled variations into real data, which can be particularly useful in scenarios where real data are limited, imbalanced, or lack specific variations. We present three different techniques to generate partially synthetic iris data: one that leverages the classical Relativistic Average Standard Generative Adversarial Network (RaSGAN), a novel Cyclic Image Translation Generative Adversarial Network (CIT-GAN), and a novel Multi-domain Image Translative Diffusion StyleGAN (MID-StyleGAN). While RaSGAN can generate realistic-looking iris images, the method is not scalable to multiple domains (such as generating different types of PAs). To overcome this limitation, we propose CIT-GAN, which generates iris images using multi-domain style transfer. To further address the issue of quality imbalance across domains, we develop MID-StyleGAN, which exploits the stable and superior generative power of diffusion-based StyleGAN. The goal of “full synthesis” is to generate iris images with both inter- and intra-class variations. In this regard, we propose two novel architectures, viz., iWarpGAN and IT-diffGAN. The proposed iWarpGAN focuses on generating iris images whose identities differ from those in the training data using two transformation pathways: (1) Identity Transformation and (2) Style Transformation. On the other hand, IT-diffGAN projects input images onto the latent space of a diffusion GAN, identifying and manipulating the features most relevant to identity and style. By adjusting these features in the latent space, IT-diffGAN generates new identities while preserving image realism.
A number of experiments are conducted using multiple iris and ocular datasets in order to evaluate the quality, realism, uniqueness, and utility of the synthetic images generated using the aforementioned techniques. An extensive analysis conveys the benefits and the limitations of each technique. In summary, this thesis advances the state of the art in iris and ocular synthesis by leveraging the prowess of GANs and Diffusion Models.
Department: Computer Science and Engineering
Name: Ira Woodring
Date Time: Tuesday, October 29th, 2024 - 12:00 p.m.
Advisor: Dr. Charles Owen
Unified Modeling Language (UML) class diagramming is the commonly accepted mechanism for describing relationships between software components. It is also an essential educational tool used to convey the structure of software and the patterns of software design to students. Unfortunately, UML is a visual-only mechanism and is therefore not useful for developers and students who are blind or have visual impairments. This work describes a method for conveying class diagrams using audio, addressing the lack of a tool to support these populations. The method works by rigidly dividing the views of a diagram into smaller spaces; elements in these subspaces are conveyed through manipulation of audio properties. Multiple user studies were performed to show that the tool is viable for conveying the static structure of software elements and that the workload required to use it is not too high. The results indicate that the tool is effective and requires only a slightly higher workload than traditional class diagrams.
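One plausible rendering of such a mapping, offered purely as an illustration and not as the dissertation's exact scheme, assigns each element a grid cell and derives stereo pan from its horizontal position and pitch from its vertical position:

    # Illustrative coordinate-to-audio mapping (Python); hypothetical scheme.
    def audio_cue(x, y, grid=4):
        # x, y in [0, 1]; the view is rigidly divided into a grid of subspaces.
        cell = (min(grid - 1, int(x * grid)), min(grid - 1, int(y * grid)))
        pan = -1.0 + 2.0 * x            # left (-1) to right (+1)
        pitch_hz = 220.0 * 2 ** y       # one octave from bottom to top
        return cell, pan, pitch_hz

    print(audio_cue(0.8, 0.25))   # a class box toward the right of the view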
Department: Computer Science and Engineering
Name: Aryan Tanmay Gupta
Date Time: Friday, October 11th, 2024 - 1:00 p.m.
Advisor: Dr. Sandeep Kulkarni
We currently see a steady rise in the usage and size of multiprocessor systems, and so the community is ever more interested in developing fast parallel-processing algorithms. However, most algorithms require a synchronization mechanism, which is costly in terms of computational resources and time.
If an algorithm can be executed in asynchrony, then it can use all the available computation power, and the nodes can execute without being scheduled or locked. However, to show that an algorithm guarantees convergence in asynchrony, we need to generate the entire global state transition graph and check for the absence of cycles. This takes time exponential in the number of nodes, since the global state space grows exponentially with it.
In this dissertation, we present a theory that characterizes the necessary and sufficient properties of a multiprocessor algorithm that guarantees convergence even without synchronization. We develop algorithms for various problems that do not require synchronization, and we show that several existing algorithms can be executed without any synchronization mechanism.
A significant theoretical benefit of our work is in proving that an algorithm can converge even in asynchrony. Our theory implies that we can draw such conclusions about an algorithm by showing only that the local state transition graph of a computing node forms a partial order, rather than by generating the entire global state space and determining the absence of cycles in it. Thus, the complexity of rendering such proofs, formal or social, is dramatically reduced.
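To make the contrast concrete, here is a minimal sketch assuming a toy dictionary encoding of a node's local transition graph (the thesis' actual formalism is richer): checking that the local graph is acyclic, and hence induces a partial order, takes time linear in its size, while the global product state space grows exponentially with the number of nodes.

```python
# Sketch: checking convergence-compatibility via the LOCAL transition graph.
# Hypothetical toy encoding; every successor must also appear as a key.

from collections import deque

def is_partial_order(local_graph):
    """Kahn's algorithm: the local state transition graph admits a
    topological order iff it is acyclic (i.e., induces a partial order)."""
    indeg = {s: 0 for s in local_graph}
    for s, succs in local_graph.items():
        for t in succs:
            indeg[t] += 1
    queue = deque(s for s, d in indeg.items() if d == 0)
    visited = 0
    while queue:
        s = queue.popleft()
        visited += 1
        for t in local_graph[s]:
            indeg[t] -= 1
            if indeg[t] == 0:
                queue.append(t)
    return visited == len(local_graph)

# A node's local transitions (acyclic => partial order):
local = {0: [1], 1: [2], 2: []}
n_nodes = 20
print(is_partial_order(local))   # True: the local check is cheap
print(len(local) ** n_nodes)     # global state space size: 3^20 states
```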
Experiments show a significant reduction in time to converge when we compare the execution time of algorithms from the literature against the algorithms we design. We observe similar results when we run an algorithm that guarantees convergence in asynchrony under a scheduler versus in asynchrony. These results demonstrate some of the important practical benefits of our work.
Department: Computer Science and Engineering
Name: Hongzhi Wen
Date Time: Tuesday, August 6th, 2024 - 9:30 a.m.
Advisor: Dr. Jiliang Tang
The rapid advancement of single-cell technologies allows for simultaneous measurement of multiple molecular features within individual cells, providing unprecedented multimodal data through single-cell multi-omics and spatial omics technologies. This thesis addresses the complex challenges of modeling these multimodal interactions using deep learning techniques. We present two series of studies: the first, comprising scMoGNN and scMoFormer, explores the application of graph transformers to model relations between multimodal features while incorporating external domain knowledge; the second, SpaFormer, proposes a transformer-based framework for spatial transcriptomic data that extracts cell context information. Despite the effectiveness of these models, their knowledge transferability across tasks and datasets remains limited. To overcome this, we introduce a new transformer-based foundation model, CellPLM, which encodes inter-cellular relations and multimodal features, demonstrating significant potential for future research in single-cell biology.
Department: Computer Science and Engineering
Name: Shengjie Zhu
Date Time: Wednesday, July 31st, 2024 - 10:00 a.m.
Advisor: Dr. Xiaoming Liu
Recovering structure and motion from videos is a well-studied, comprehensive 3D vision task that involves (1) image calibration, (2) two-view pose initialization, and (3) multi-view Structure-from-Motion (SfM). Prior art consists of optimization-based methods built on sparse image correspondence inputs. This thesis develops systematic approaches to enhance the classic solutions with deep learning models. We introduce EdgeDepth and PMatch for dense monocular depthmap and dense binocular correspondence map estimation. Since classic approaches typically rely on sparse and accurate inputs, they are less suitable for the dense yet high-variance predictions from dense depth and correspondence models. As a solution, we propose to optimize through the robust inlier-counting-based scoring function widely applied in RANdom SAmple Consensus (RANSAC). (1) For image calibration, we introduce WildCamera. The system applies a RANSAC algorithm to a dense incidence field regressed by a deep model, calibrating in-the-wild monocular images without a checkerboard. (2) For two-view pose estimation, we introduce LightedDepth. It estimates the optimal pose by aligning the depth map with the correspondence map, maximizing the projective inliers. (3) The strategy is extended to a Hough Transform in RSfM for multi-view SfM over a local 3-to-9-frame system. Finally, we generalize the RSfM Hough Transform to a cumulative distribution function loss for the large-scale SfM task. To this end, we formulate a comprehensive system that recovers structure and motion from two-view / local multi-view / large-scale multi-view images with dense monocular depthmaps and binocular correspondence maps. Compared to prior art, our methods show improved accuracy on two-view / local multi-view systems and on-par accuracy on large-scale multi-view systems.
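For readers unfamiliar with the inlier-counting score mentioned above, here is a minimal, generic sketch of RANSAC-style hypothesis scoring on a toy 1D line-fitting problem; the residual function, threshold, and data are illustrative placeholders, not the thesis' actual formulation.

```python
# Sketch of RANSAC-style inlier counting for hypothesis scoring.
# 'residual_fn' and the threshold are hypothetical placeholders.

import numpy as np

def ransac_score(hypotheses, observations, residual_fn, thresh=1.0):
    """Score each hypothesis by its number of inliers: observations whose
    residual under the hypothesis falls below a robust threshold."""
    best_h, best_inliers = None, -1
    for h in hypotheses:
        residuals = residual_fn(h, observations)        # shape: (N,)
        inliers = int(np.sum(residuals < thresh))       # robust score
        if inliers > best_inliers:
            best_h, best_inliers = h, inliers
    return best_h, best_inliers

# Toy usage: fit a 1D line y = a*x; observations are (x, y) pairs.
rng = np.random.default_rng(0)
xs = rng.uniform(0, 10, 100)
ys = 2.0 * xs + rng.normal(0, 0.2, 100)
ys[::10] += 20.0                                        # gross outliers
obs = np.stack([xs, ys], axis=1)
resid = lambda a, o: np.abs(o[:, 1] - a * o[:, 0])
print(ransac_score(np.linspace(0, 4, 81), obs, resid))  # picks a ~= 2.0
```

The inlier count is robust because gross outliers contribute nothing to the score, unlike a least-squares residual they would dominate.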
Department: Computer Science and Engineering
Name: Salman Ali
Date Time: Thursday, July 25th, 2024 - 2:00 p.m.
Advisor: Dr. Wolfgang Banzhaf
Complex supply chains such as the 'food supply chain' network involve diverse subsystems like stock management, feed harvesting, cold-storage transportation, and retail businesses. Throughout the food supply chain, major subsystems are owned by private organizations, which inhibits sharing of potentially useful common information. This results in a lack of trust and traceability, and a lost opportunity to share knowledge and optimize the chain for better economic and environmental outcomes.
Bringing together dispersed and disjoint supply chain participants to collaborate on common applications beyond the 'point-of-sale' communication channel comes with numerous technological and data-restriction challenges, which necessitates a generic, scalable, and user-controlled collaboration framework.
This thesis takes on the challenge of learning common knowledge in disjoint and dispersed supply chains by proposing a decentralized and distributed supply chain connectivity and collaboration framework controlled and run by chain participants.
Using the 'Beef Supply Chain' as an example, several useful applications are presented, including carbon-emissions tracking, supply chain optimization, and collaborative machine learning over secure data pipelines. Through practical applications and system evaluation, the efficacy of the proposed framework is demonstrated for collaboration, policy sharing, traceability, federated machine learning, knowledge transfer, and increased value for supply chain participants.
Department: Computer Science and Engineering
Name: Wentao Bao
Date Time: Thursday, July 18th, 2024 - 2:00 p.m.
Advisor: Dr. Yu Kong
Though we have witnessed waves of success in visual intelligence, teaching machines to understand visual content at the level of human intelligence remains a fundamental challenge. In past decades, visual understanding has been extensively explored through computer vision tasks such as object (or activity) recognition, segmentation, and detection. However, existing methods can hardly be deployed in real open-world applications, where unseen environments, objects, and activities inevitably appear in testing. Such a limitation is attributed to the closed-world assumption that ignores the unknown in model design, learning, and evaluation.
In this dissertation defense, I will introduce my work that goes beyond traditional closed-world visual understanding and tackles several challenging open-world problems. The goal is to endow machines with visual perception capabilities in an open world spanning unseen environments, image objects, and video activities. First, I will investigate open-world visual forecasting problems in unseen perception environments, such as autonomous driving and virtual reality. Specifically, we are interested in how early observed video can be leveraged to promptly forecast traffic accident risk for safe self-driving, and to predict 3D hand motion trajectories in an unseen first-person view. Second, I will cover open-world visual recognition problems that aim to identify unseen visual concepts. In this part, I am interested in identifying and localizing unseen video activities, such as human actions in general videos. Lastly, I will delve into open-world visual language understanding problems that further recognize unseen visual concepts from language queries. Specifically, we are interested in understanding unseen compositional objects in images and spatiotemporally detecting unseen human actions.
Department: Computer Science and Engineering
Name: Austin Ferguson
Date Time: Friday, June 28th, 2024 - 3:00 p.m.
Advisor: Dr. Charles Ofria
While evolution has created a stunning diversity of complex traits in nature, isolating the details of how a particular trait evolved remains challenging. Specifically, what were the critical events in evolutionary history that made the particular trait more or less likely to arise? We must consider historical contingency, where even small changes, such as an apparently neutral mutation, can have a substantial influence on long-term evolutionary outcomes. Evolutionary biologists have long been interested in the role that historical contingency plays in evolution, but testing hypotheses about its effects has traditionally been difficult and time-consuming, if possible at all.
Here I leverage the speed and power of digital evolution to experimentally test the role of historical contingency in evolution. I start by observing how the evolution of phenotypic plasticity stabilizes future evolutionary dynamics. Next, I employ analytic replay experiments to empirically test which mutations in a population’s history increased the likelihood that associative learning evolves, first as case studies and then using more statistically powerful experimental approaches. I demonstrate that single mutations can drastically increase the odds of learning appearing, shifting it from a rare possibility to a near inevitability, and I find that these “potentiating” mutations exist in all studied lineages. Finally, I use potentiating mutations to develop an intuitive view into how adaptive momentum increases evolutionary exploration in populations experiencing disequilibrium.
We are only beginning to scratch the surface of how historical contingency influences evolution, but digital evolution systems can expedite this process by testing these hypotheses and further refining these techniques for use in natural organisms. This work, and others like it, is pivotal in understanding how populations evolved in the past, how their accumulated history currently affects them, and how they might evolve far into the future.
Department: Computer Science and Engineering
Name: Oyendrila Dobe
Date Time: Friday, June 21st, 2024 - 11:00 a.m.
Advisor: Dr. Borzoo Bonakdarpour
Formal verification ensures the correctness of systems with respect to user-specified requirements. My research explores the verification, by model checking, of systems described at an abstract level as Markov models against hyperproperties expressed in HyperPCTL. We represent systems as Markov models due to their flexibility in modeling uncertainty (in terms of nondeterminism, randomization, and partial observability) and their simplicity in using the current state to determine the future evolution of the system. HyperPCTL allows the expression of probabilistic hyperproperties. In general, hyperproperties are system-level requirements that can express properties related to security, privacy, robustness, efficiency, etc. Prominent examples include noninterference of secret inputs on publicly observable outputs, observational determinism of public outputs, optimal path planning in robotics, individual fairness in models, side-channel timing attacks, and conformance of different system versions.
Given this combination of model and properties, we extend the previously proposed logic HyperPCTL to express specifications involving nondeterminism and rewards, study the complexity of the general model checking problem for this logic, and propose constraint-based algorithms for it in a tool called HyperProb. The high complexity of this problem has further motivated our research on fragment-specific algorithms that scale better, as well as approximate statistical model checking algorithms that extend the existing prominent tool PLASMA. We have further explored the parameter synthesis problem, where, assuming that a HyperPCTL property holds in a model, we synthesize valid values for unknown parameters in our models. Overall, my talk describes our research efforts in advancing the state of the art in quantitative model checking of probabilistic hyperproperties in Markov models.
Department: Computer Science and Engineering
Name: Han Xu
Date Time: Monday, June 3rd, 2024 - 12:00 p.m.
Advisor: Jiliang Tang
When machine learning (ML) and artificial intelligence (AI) are applied in safety-critical tasks, such as autonomous vehicles or financial fraud detection, their reliability, especially under adversarial attacks, has become increasingly important. In order to enhance ML safety, it is essential to develop sound solutions for (1) identifying adversarial examples to uncover the weaknesses of models and (2) building robust models that can resist adversarial examples. In this talk, we will introduce some of our recent research findings in both directions. On the attack side, we will delve into our proposed attack algorithm that can achieve high efficiency and optimality, particularly in the discrete data domain, such as text data. On the defense side, we will address one important but frequently ignored weakness of adversarial training (one of the most popular strategies to improve model robustness), known as the “bias issue” of adversarial training. Motivated by these new findings and methodologies, we will also discuss potential future research directions as well as the social impacts of these research problems.
Department: Computer Science and Engineering
Name: Asadullah Hill Galib
Date Time: Thursday, May 30th, 2024 - 12:00 p.m.
Advisor: Pang-Ning Tan
The accurate modeling of extreme values in time series data is a critical yet challenging task that has garnered significant interest in recent years. The impact of extreme events on human and natural systems underscores the need for effective and reliable modeling methods. The proposed thesis aims to develop novel deep learning frameworks that can effectively model extreme events in time series data. The thesis introduces four novel deep learning frameworks: DeepExtrema, Self-Recover, SimEXT, and FIDE, which offer promising solutions for forecasting, imputation, representation learning, and generative modeling of extreme values in time series data. DeepExtrema focuses on integrating extreme value theory with deep learning formulation to improve the accuracy and reliability of extreme events forecasting. Self-Recover addresses data fusion challenges that arise from varying temporal coverage associated with long-term and random missing values of predictors. SimEXT explores how deep learning can be utilized to learn useful time series representations that effectively capture tail distributions for modeling extreme events. FIDE introduces a high-frequency inflation-based conditional diffusion model tailored towards preserving extreme value distributions within generative modeling. These frameworks are evaluated using real-world and synthetic datasets, demonstrating superior performance over existing state-of-the-art methods. The contributions of this research are significant in advancing the field of time series modeling and have practical implications across various domains, such as climate science, finance, and engineering.
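Since DeepExtrema is described as integrating extreme value theory with deep learning, a minimal sketch of the generalized extreme value (GEV) negative log-likelihood, the kind of EVT term such a model could attach to network outputs, may help orient readers; the code below uses the standard GEV formula (the xi != 0 branch) and is not the thesis' exact loss.

```python
# Sketch: GEV negative log-likelihood, the kind of extreme-value term a
# model like DeepExtrema could attach to network outputs (mu, sigma, xi).
# This is the generic EVT formula, not the thesis' exact loss.

import numpy as np

def gev_nll(x, mu, sigma, xi):
    """Negative log-likelihood of the GEV distribution (xi != 0 branch).
    Valid only where 1 + xi*(x - mu)/sigma > 0."""
    z = (x - mu) / sigma
    t = 1.0 + xi * z
    if np.any(t <= 0):
        return np.inf                        # outside the GEV support
    return np.sum(np.log(sigma) + (1.0 + 1.0 / xi) * np.log(t)
                  + t ** (-1.0 / xi))

# Block maxima of a toy series; smaller NLL = better-fitting parameters.
rng = np.random.default_rng(1)
maxima = rng.gumbel(loc=10.0, scale=2.0, size=500)
print(gev_nll(maxima, mu=10.0, sigma=2.0, xi=0.1))
print(gev_nll(maxima, mu=0.0, sigma=1.0, xi=0.1))   # worse fit, larger NLL
```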
Department: Computer Science and Engineering
Name: Hanqing Guo
Date Time: Tuesday, May 14th, 2024 - 9:30 a.m.
Advisor: Dr. Li Xiao
Voice, as a primary way for people to communicate with each other and interact with computers and smart devices, is expected to be secure and private when people use it. However, recent studies have demonstrated the vulnerabilities of using voice to talk with people, conduct speaker authentication, and deliver messages to smart devices. For example, an eavesdropper can record the conversation; an adversary can play back the speaker’s sound to attack the speaker authentication model; a hacker can craft fake speech to damage the reputation of the victim or launch impersonation scams; furthermore, an attacker can perform an adversarial voice attack to control the victim’s smart devices. This talk aims to understand the root causes of these vulnerabilities, address the challenges of achieving private and secure voice communication, and explore future directions to fully resolve the security concerns of AI-enabled voice models and systems.
Department: Computer Science and Engineering
Name: Guangjing Wang
Date Time: Monday, May 13th, 2024 - 1:00 p.m.
Advisor: Dr. Qiben Yan
In the realm of the Internet of Things (IoT), users, devices, and environments communicate and interact with each other, creating a web of complex interactions. This interconnected web of interactions makes the IoT a powerful tool for enhancing human experiences. However, it simultaneously presents substantial challenges in ensuring security and privacy amid interactions among users, devices, and environments.
This dissertation investigates potential IoT interaction security and privacy issues by customizing data-centric AI algorithms. First, it studies complex interactions in smart homes where many interconnected smart devices are deployed. A graph learning-based threat detection system is designed to discover potential interactive threats across multiple smart home platforms. Second, considering smart home data privacy and data heterogeneity issues, a dynamic clustering-based federated graph learning framework is proposed to collaboratively train a threat detection model. Meanwhile, a Monte Carlo beam search-based method is designed to identify the causes of interactive threats. Third, we explore the privacy issues behind the interactions between users and smartphones. Specifically, a potential bio-information leakage attack channel has been identified that utilizes near-ultrasound signals from a smartphone to recognize facial expressions based on a contrastive attention learning model. Fourth, we reveal two critical overprivileged issues in mobile activity sensing data generated from interactions between users and mobile devices: metadata-level and feature-level overprivilege. Correspondingly, we design a multi-grained data generation model to reconstruct mobile activity sensing data so as to mitigate the privacy concerns behind these overprivileged issues.
We have implemented and extensively evaluated the proposed threat detection model, federated model training method, acoustic-based expression recognition model, and privacy-preserving data reconstruction model in practical settings. This dissertation concludes with a discussion of future work. We highlight the potential challenges and opportunities associated with the applied AI techniques for addressing security and privacy issues in the IoT. This dissertation points out the pathway for future research in enhancing security and privacy to safeguard the interactions among users, devices, AI, and environments.
Department: Computer Science and Engineering
Name: Hossein Rajaby Faghihi
Date Time: Tuesday, May 7th, 2024 - 11:00 a.m.
Advisor: Dr. Parisa Kordjamshidi
Reasoning over procedural text, which encompasses texts such as recipes, manuals, and 'how-to' tutorials, presents formidable challenges due to the dynamic nature of the world it describes. These challenges are embodied in tasks such as 1) tracking entities and their status changes (entity tracking) and 2) summarizing the process (procedural summarization).
This thesis aims to enhance the representation and reasoning over textual procedures by harnessing semantic structures in the input text and imposing constraints on the models' output. It delves into using semantic structures derived from the text, including relationships between actions and objects, semantic parsing of instructions, and the sequential structure of actions. Additionally, the thesis investigates the integration of structural and semantic constraints within neural models, resulting in coherent and consistent outputs that align with external knowledge. The thesis contributes significantly to three main areas: Entity tracking, Procedural Abstraction, and the Integration of constraints in deep learning.
In the entity tracking task, four primary contributions are made:
1) the development of a novel architecture that effectively encodes the flow of events within pretrained language models,
2) seamless transfer learning from diverse corpora through task reformulation,
3) the enhancement of language models by incorporating knowledge extracted from semantic parsers and leveraging ontological abstraction of actions, and
4) the creation of a new evaluation scheme that considers fine-grained semantics in tracking entities.
Regarding procedural summarization, the thesis proposes a model with an explicit latent space for the procedure, indirectly supervised to ensure that the summary's action order corresponds to the order of events in the multi-modal instructions.
In the realm of integrating domain knowledge with deep neural networks, the thesis makes two significant contributions:
1) it contributes to the development of a generic framework that facilitates the incorporation of first-order logical constraints in neural models (a common realization of this idea is sketched below), and
2) it creates a new benchmark for evaluating constraint integration methods across five categories of tasks. This benchmark introduces novel evaluation criteria and offers valuable insights into the effectiveness of constraint integration methods across various tasks.
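As a point of reference for item 1, the following is a minimal, generic sketch of how a first-order constraint can be injected into neural training by softening it into a differentiable penalty (a product t-norm relaxation); the predicate names and weighting are hypothetical, and this is not necessarily the specific framework developed in the thesis.

```python
# Sketch: one common way to inject a logical constraint into training,
# a soft t-norm-style relaxation added as a loss penalty. Generic
# illustration only; predicate names and weights are invented.

import torch

def implication_penalty(p_a, p_b):
    """Soft penalty for the constraint A -> B over predicted probabilities.
    Under the product t-norm, A -> B is violated with degree p_a*(1 - p_b)."""
    return (p_a * (1.0 - p_b)).mean()

# Toy usage: p_cut = P(entity is "cut"), p_exists = P(entity exists);
# the domain rule "cut(e) -> exists(e)" becomes a differentiable penalty.
p_cut = torch.sigmoid(torch.randn(8, requires_grad=True))
p_exists = torch.sigmoid(torch.randn(8, requires_grad=True))
task_loss = torch.tensor(0.0)        # stand-in for the usual task loss
loss = task_loss + 0.5 * implication_penalty(p_cut, p_exists)
loss.backward()                      # the constraint shapes the gradients
```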
Department: Computer Science and Engineering
Name: Abdullah Alperen
Date Time: Friday, May 3rd, 2024 - 10:00 a.m.
Advisor: Dr. Hasan Metin Aktulga
Sparse matrix computations comprise the core component of a broad base of scientific applications in fields ranging from molecular dynamics and nuclear physics to data mining and signal processing. Among sparse matrix computations, the eigenvalue problem holds a significant place due to its common use in high-performance scientific computing. In nuclear physics simulations, for example, one of the most challenging problems is solving the large-scale eigenvalue problems arising from nuclear structure calculations. Numerous iterative algorithms have been developed to solve this problem over the years.
Lanczos and the locally optimal block preconditioned conjugate gradient (LOBPCG) method are two such popular iterative eigensolvers. Together, they present a good mix of the computational motifs encountered in sparse solvers. In this work, we describe our efforts to accelerate large-scale sparse eigensolvers by employing asynchronous runtime systems, developing hybrid algorithms, and utilizing GPU resources.
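To illustrate the shared motif, here is a textbook Lanczos sketch in which each step is dominated by one sparse matrix-vector product (SpMV); it omits reorthogonalization and everything else a production solver needs, so it is a conceptual sketch rather than this dissertation's implementation.

```python
# Sketch: textbook Lanczos iteration, the SpMV-dominated motif at the core
# of eigensolvers like those studied here. Numerically naive: no
# reorthogonalization, and it assumes no breakdown (beta != 0).

import numpy as np
import scipy.sparse as sp

def lanczos(A, k, rng=np.random.default_rng(0)):
    """Build a k-step tridiagonal approximation T of symmetric A;
    eigenvalues of T approximate extremal eigenvalues of A."""
    n = A.shape[0]
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    q_prev = np.zeros(n)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(k):
        w = A @ q - beta * q_prev        # one sparse mat-vec per step
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha); betas.append(beta)
        q_prev, q = q, w / beta
    return np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)

A = sp.random(2000, 2000, density=1e-3, random_state=0)
A = (A + A.T) * 0.5                      # symmetrize the test matrix
T = lanczos(A, k=50)
print(np.max(np.linalg.eigvalsh(T)))     # approximates A's largest eigenvalue
```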
We first evaluate three task-parallel programming models, OpenMP, HPX and Regent, for Lanczos and LOBPCG. We demonstrate these asynchronous frameworks’ merit on two architectures, Intel Broadwell (a multicore processor) and AMD EPYC (a modern manycore processor). We achieve up to an order of magnitude improvement both in execution time and cache performance.
We then examine and compare several iterative methods for solving the large-scale eigenvalue problems arising from nuclear structure calculations. In particular, besides Lanczos and LOBPCG, we discuss the possibility of using the block Lanczos method and the residual minimization method accelerated by direct inversion in the iterative subspace (RMM-DIIS). We show that RMM-DIIS can be effectively combined with either block Lanczos or LOBPCG to yield a hybrid eigensolver with several desirable properties.
We finally address the challenges posed by the emergence of accelerator-based computer architectures in achieving high performance for large-scale sparse computations. We particularly focus on the scalability of the sparse matrix-vector multiplication (SpMV) and sparse matrix multi-vector multiplication (SpMM) kernels of Lanczos and LOBPCG. We scale their performance up to hundreds of GPUs by improving their computation and communication aspects through hand-optimized CUDA kernels and hybrid communication methods.
Department: Computer Science and Engineering
Name: Steven Grosz
Date Time: Thursday, April 11th, 2024 - 1:30 p.m.
Advisor: Dr. Anil Jain
Fingerprint recognition is a long-standing and important topic in computer vision and pattern recognition research, supported by its diverse applications in real-world scenarios such as access control, consumer products, national identity, and border security. Recent advances in deep learning have greatly enhanced fingerprint recognition accuracy and efficiency alongside traditional hand-crafted fingerprint recognition methods, particularly in controlled settings. While state-of-the-art fingerprint recognition methods excel in controlled scenarios, like rolled fingerprint recognition, their performance tends to drop in uncontrolled settings, such as latent and contactless fingerprint recognition. These scenarios are often characterized by extreme degradations and image variations in the captured images. This performance drop is due to the inability of fingerprint embeddings (feature vectors obtained via deep networks) to generalize across the variations in captured fingerprint images between controlled and uncontrolled settings.
The challenges in the generalization of fingerprint embeddings, from controlled to uncontrolled settings, encompass issues such as insufficient labeled data, varying domain characteristics (often referred to as the “domain gap”), and the misalignment of fingerprint features due to information loss. This thesis proposes a series of methods aimed at addressing these challenges in various unconstrained fingerprint recognition scenarios. We begin in chapter 2 with an examination of cross-sensor and cross-material presentation attack detection (PAD), where the sensing mechanism and encountered presentation attack instruments (PAIs) may be unknown. We present methods to augment the given training data to include a wider diversity of possible domain characteristics, while simultaneously encouraging the learning of domain-invariant representations. Next, we turn our attention in chapter 3 to the challenging scenario of contact to contactless fingerprint matching, where misaligned fingerprint features due to differences in contrast, perspective, and non-linear distortion are corrected via a series of deep learning-based preprocessing techniques to minimize the domain gap between contact and corresponding contactless fingerprint images. In chapter 4, we aim to improve the sensor interoperability of fingerprint recognition by leveraging a diversity of deep learning representations, integrating convolutional neural network and attention-based vision transformer architectures into a single, multi-model embedding. Similarly, in chapter 5, we further improve the robustness and universality of fingerprint representations by fusing multiple local and global embeddings, and demonstrate a marked improvement in latent to rolled fingerprint recognition performance, both in terms of accuracy and efficiency. Next, chapter 6 presents a method for synthetic fingerprint generation, capable of mimicking the distribution of real (i.e., bona fide) and PA (i.e., spoof) fingerprint images, to alleviate the lack of publicly available data for building robust fingerprint presentation attack detection algorithms. Finally, in chapter 7 we extend our fingerprint generation capabilities toward generating universal fingerprints of any fingerprint class, acquisition type, sensor domain, and quality, all to improve fingerprint recognition training and generalization performance across diverse scenarios.
Department: Computer Science and Engineering
Name: Declan McClintock
Date Time: Monday, April 8th, 2024 - 10:00 a.m.
Advisor: Dr. Charles Owen
Serious games research shows that games can increase engagement and improve learning outcomes over traditional instruction, but the impact of specific elements of serious games has yet to be fully explored across many contexts. Additionally, many existing intervention studies omit the details of the game design and development theory that informed the creation of the games used in the study. This discards an important level of context surrounding why the games were successful and does a disservice to the field by not propagating useful design theory.
Two issues with existing game design theories are that they do not build fully on top of each other and that they leave out practical guidelines for their use in the design and development process. This further limits the spread of useful design theory and its impact in industry and academia. The work in this thesis carefully outlines the influence of existing game design theory on the design and development of a game project built to study the impact of the narrative element of serious games. Additionally, this thesis builds a new framework aimed at being more comprehensive, easier to build upon, and equipped with clear practical guidelines for its use. The main study in this thesis examines the engagement of students playing a single serious game with a cohesive narrative, compared against multiple games without a narrative tying them together. The two cases covered the same set of learning content and differed only in their narratives. The results suggest that either approach is likely to have the same effect on engagement, but that learning outcomes merit further exploration.
This study’s research is supported by design research explaining the design theory behind the games developed for and used in the experiment, as well as more specific details of the games’ production. This allows the results to be understood within a larger serious game design and development context that will help inform future work. Additionally, this thesis expands on the lessons learned from the design research and on criticisms of existing frameworks to produce the Iterative Game Design and Development (IGDD) framework. IGDD provides a broader framework for game design and development with guidelines for its application in practice. The IGDD framework also explains how it should be modified and extended, both to allow it to be used across many contexts and to allow future theory building to proceed collaboratively on top of previous works, rather than adjacent to, and in assumed competition with, other design theory.
Department: Computer Science and Engineering
Name: Junwen Chen
Date Time: Thursday, April 4th, 2024 - 2:00 p.m.
Advisor: Yu Kong
Action recognition is a crucial aspect of video understanding, with considerable progress being made in studies based on curated short video clips. However, in real-world scenarios, videos are often long-form and untrimmed, providing continuous surveillance of our surroundings. Unfortunately, progress in action recognition for long-form videos lags behind. Unlike short-term videos that concentrate on a single action, the primary challenge in long-form videos lies in understanding multiple actions/events within the footage to perform complex reasoning.
In this thesis, I will introduce my research endeavors in developing models to comprehend long-form videos. The first part of the thesis delves into perceiving the rich dynamics in long-form videos. My research seeks to learn fine-grained motion representations across multiple actions/events over a long-horizon range by exploiting the potential of multi-modal context. The second part focuses on leveraging the long-range dependencies of events to boost downstream temporal reasoning tasks. Finally, considering the wide applications of video models, we work on cultivating trustworthiness in models for long-form videos from the perspectives of static-bias mitigation and interpretable reasoning.
Department: Computer Science and Engineering
Name: Guangyue Xu
Date Time: Thursday, February 15th, 2024 - 12:00 p.m.
Advisor: Parisa Kordjamshidi
Humans learn concepts in a grounded and compositional manner. Such compositional and grounding abilities enable humans to understand an endless variety of scenarios and expressions. Although deep learning models have pushed performance to new limits on many Natural Language Processing and Computer Vision tasks, we still know little about how these models process compositional structures and their potential to accomplish human-like meaning composition. The goal of this thesis is to advance current compositional generalization research on both the evaluation and the design of learning models. In this direction, we make the following contributions.
Firstly, we introduce a transductive learning method to utilize the unlabeled data for learning the distribution of both seen and novel compositions. Moreover, we utilize the cross-attention mechanism to align and ground the linguistic concepts into specific regions of the image to tackle the grounding challenge.
Secondly, we develop a new prompting technique for compositional learning by considering the interaction between element concepts. In our proposed technique called GIPCOL, we construct a textual input that contains rich compositional information when prompting the foundation vision-language model. We use the CLIP model as the pre-trained backbone vision-language model and improve its compositional zero-shot learning ability with our novel soft-prompting approach.
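For context on the soft-prompting idea, here is a minimal, generic sketch of learnable prompt vectors prepended to token embeddings for a frozen text encoder; the dimensions and names are illustrative, and GIPCOL's actual architecture (e.g., its compositional prompt construction) is richer than this.

```python
# Sketch: the generic soft-prompting idea behind approaches like GIPCOL.
# Learnable "prompt" vectors are prepended to the embedded class tokens
# and trained while the backbone stays frozen. Sizes are illustrative.

import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, n_ctx=4, dim=512):
        super().__init__()
        # Learnable context vectors replace hand-written prompt words.
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)

    def forward(self, token_embs):
        """token_embs: (batch, n_tokens, dim) embeddings of e.g. 'red apple'.
        Returns (batch, n_ctx + n_tokens, dim) for the frozen text encoder."""
        batch = token_embs.shape[0]
        ctx = self.ctx.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([ctx, token_embs], dim=1)

prompt = SoftPrompt()
embs = torch.randn(8, 2, 512)    # attribute+object token embeddings
print(prompt(embs).shape)        # torch.Size([8, 6, 512])
# Only prompt.ctx receives gradients; the vision-language backbone is frozen.
```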
Thirdly, since retrieval plays a critical role in human learning, our work studies how retrieval can help compositional learning. We propose MetaReVision, a new retrieval-enhanced meta-learning model that addresses the visually grounded compositional concept learning problem.
Finally, we evaluate the large generative vision and language models in solving compositional zero-shot learning within the in-context learning framework. We highlight their shortcomings and propose retriever and ranker modules to improve their performance in addressing this challenging problem.
Department: Electrical and Computer Engineering
Name: Benjamin Sims
Date Time: Monday, January 20th, 2025 - 10:00 a.m.
Advisor: Dr. Sergey Baryshev
High-brightness injectors are key to improving ultrafast electron diffraction (UED), X-ray free-electron laser (XFEL), and laser Compton backscattering technologies, as they increase these systems' resolution, efficiency, and performance. Recent advancements in cathode technologies and emittance compensation have provided substantial gains in brightness, but additional approaches will be necessary to continue pushing to higher levels of brightness and the resulting light-source luminosity.
This dissertation discusses novel practical approaches and designs that can be implemented on various accelerators to improve their brightness. Chapter 2 focuses on space-charge and RF emittance management, illustrated using a canonical injector. Chapter 3 discusses implementing cathode retraction for in-situ intrinsic emittance measurement, with the goals of decreasing emittance and ensuring desired cathode performance. Chapter 4 explores a novel multimode cavity design focused on bunch compression, which increases the bunch current and thus the brightness.
Department: Electrical and Computer Engineering
Name: Xuhui Huang
Date Time: Friday, December 11th, 2024 - 9:00 a.m.
Advisor: Professor Yiming Deng
We explore the transformative potential of artificial intelligence (AI) and deep learning to enhance Structural Health Monitoring (SHM) and Nondestructive Evaluation (NDE). This research develops a novel framework integrating transfer learning, explainable AI techniques, data augmentation using generative models, and physics-informed deep learning approaches. It addresses critical challenges such as limited labeled data, nontransparent decision-making, and adaptability to varying operational conditions. By leveraging transfer learning and domain adaptation, the model effectively transfers knowledge from numerical models to experimental data, bridging the gap between modeling and real-world conditions. Transferring knowledge through surrogate modeling, on the other hand, involves simplifying complex physical phenomena to enable efficient forward prediction of response signals and to solve inverse problems for determining defect geometry. Applied to Motion-Induced Eddy Current Testing (MIECT), surrogate models enable real-time monitoring and adaptive responses. In particular, we utilized Gaussian Process Regression to integrate high- and low-fidelity MIECT data, improving predictive accuracy, while an auto-compensation algorithm enhances Pulsed Eddy Current (PEC) measurements by mitigating electromagnetic interference. By exploring various deep learning architectures, we demonstrate and compare their capability to accurately localize and characterize acoustic emission sources. Integrating explainable AI techniques like Class Activation Mapping (CAM) and Gradient-weighted CAM (Grad-CAM) turns deep learning into an interpretable methodology, enabling transparent decision-making. Together, this dual framework of deep learning and surrogate modeling significantly advances AI applications in NDE, providing a comprehensive approach to improving the scalability, adaptability, and reliability of NDE technologies in dynamic environments.
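As an aside on the Gaussian Process Regression fusion step, here is a minimal sketch of one standard residual-correction scheme for combining plentiful low-fidelity data with scarce high-fidelity data; the functions and data are synthetic stand-ins, and the dissertation's actual multi-fidelity formulation may differ.

```python
# Sketch: a simple residual-style multi-fidelity scheme with Gaussian
# Process Regression, one standard way to fuse low- and high-fidelity
# data. Synthetic stand-ins only, not the dissertation's formulation.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_low(x):   # cheap, biased model (e.g., coarse simulation)
    return np.sin(8 * x)

def f_high(x):  # expensive, accurate response (e.g., experiment)
    return 1.2 * np.sin(8 * x) + 0.3 * x

x_lo = np.linspace(0, 1, 40)[:, None]    # plentiful low-fidelity samples
x_hi = np.linspace(0, 1, 6)[:, None]     # scarce high-fidelity samples

# GP on the low-fidelity trend, then a second GP on the residual.
gp_lo = GaussianProcessRegressor(RBF(0.1)).fit(x_lo, f_low(x_lo.ravel()))
resid = f_high(x_hi.ravel()) - gp_lo.predict(x_hi)
gp_res = GaussianProcessRegressor(RBF(0.2)).fit(x_hi, resid)

x_test = np.array([[0.37]])
pred = gp_lo.predict(x_test) + gp_res.predict(x_test)
print(pred, f_high(0.37))                # fused prediction vs. truth
```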
Department: Electrical and Computer Engineering
Name: Akash Saxena
Date Time: Friday, December 6th, 2024 - 2:00 p.m.
Advisor: Professor Erin Purcell
Intracortical neural implants (ICNTs) are a powerful tool to treat and study neurological disorders. The performance of these implants depends on successful recording and stimulation over extended periods (up to years). This requires the recorded signal to remain consistent throughout implantation or, from the perspective of providing stimulation, that stimulation with the same parameters exhibit similar effects. This does not hold true for ICNTs: the recorded signals exhibit intra-day variability, loss of signal quality, and potential desensitization to stimulation over chronic periods. The biological tissue response is a significant factor contributing to the loss of recording quality and signal instability for intracortical neural implants at chronic time points. Neuronal death and the presence of astrocytes around the implant are quantified to measure the strength of the tissue response to the implanted electrode. The usual trend observed at chronic time points is increasing neuronal death, the presence of astrocytes near the implant, and the formation of a glial sheath around the implant. The biocompatibility of available neural implants is primarily judged on these two metrics. These metrics have guided various designs intended to reduce the tissue response, lowering both neuronal death and astrocyte density around the implant. However, the tissue response is still triggered, and signal instability remains problematic. This leads us to believe that conventional metrics alone are insufficient to guide implant design. Other metrics must be uncovered to complete the parameter space governing the biological tissue response to neural implants. The goal of this thesis is to create computational pipelines using signal processing, image processing, and data analysis methods to (1) better understand the interaction between the tissue and the neural implant, (2) uncover variables that might affect the recording quality of the implant, and (3) potentially guide future neural implant design from the perspectives of gene expression, metrics of extracellular recordings, and astrocyte morphology.
Department: Electrical and Computer Engineering
Name: Hassa Banna
Date Time: Tuesday, December 3rd, 2024 - 3:15 p.m.
Advisor: Professor Wen Li
Analysis of trace-level metals in environmental samples (e.g., soil, water, and plant samples) is essential for assessing environmental quality and food safety. This dissertation reports a non-toxic, eco-friendly, and cost-effective sensing method capable of in-situ detection of microgram-per-liter (µg/L) levels of heavy metal ions in plant and soil solutions using carbon-based electrodes, including carbon fiber electrodes (CFEs) and boron-doped diamond electrodes (BDDs). The electrochemical behaviors of the CFEs and BDDs were characterized by cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS) measurements. As proof of principle, the CFEs and BDDs were validated for sensing selected heavy metals in buffer solutions as well as in extracted plant and soil solutions using differential pulse anodic stripping voltammetry (DP-ASV). The ideal pH range for heavy metal detection was also extensively investigated and was found to be between pH 4.0 and pH 5.0. Experimental results confirm that the CFEs were able to simultaneously measure cadmium (Cd), lead (Pb), and mercury (Hg) with a limit of detection (LOD) of 2.10 µg/L in buffer solution with an effective area (Aeff) of 0.123 cm2, showcasing good selectivity and sensitivity. The BDD electrodes, on the other hand, showed simultaneous measurement of these metals with an LOD of 17.34 µg/L in buffer solution with an Aeff of 0.122 mm2. In addition, BDD offers precise control over fabrication by utilizing a microfabrication facility. Overall, the integration of these sensors with a microfluidic system lays a foundation for long-term, in-situ, and stable electrochemical analysis of aqueous environmental matrices.
Department: Electrical and Computer Engineering
Name: Wesley Spain
Date Time: Monday, November 18th, 2024 - 10:30 a.m.
Advisor: Dr. John Albrecht and Dr. Matthew Hodek
IC packaging is a critical factor in emerging next-generation RF and mmWave systems design. As demand for higher data bandwidth and greater device connectivity increases, methods for developing low-cost, high-quality RF systems in the mmWave range and beyond must be developed and improved upon. Many traditional manufacturing techniques have been iterated on to address this issue, but most run into a hard limit in terms of RF performance and the ability to miniaturize heterogeneously integrated architectures into cost-effective packages.
Additive manufacturing (AM) offers emerging processes that may be used to address these issues, providing solutions with low operating cost and flexibility across a wide range of design geometries. Some high-performance designs that are difficult or impossible to achieve with traditional manufacturing techniques may be realized using AM, extending the use of more robust IC packaging to high-frequency applications.
This dissertation presents engineering advancements in the field of RF and mmWave systems manufacturing through the use of AM techniques. Chip-in-Pocket (CiP) IC packaging is investigated, including the impact of printed die fill materials and interconnects on RF system performance at Ku-band. Printed die attach techniques and their effect on the reliability of printed interconnects and die leveling are explored. Finally, a process for transferring printed RF components and packages from the printing substrate to other surfaces is demonstrated for Ku- to Ka-band components, as a means to improve the manufacturing reliability of systems leveraging AM components and to demonstrate the efficacy of combining AM components with traditional manufacturing. Aerosol-Jet Printing (AJP) is leveraged as the main AM method for high-precision RF structures, from IC interconnects and vias all the way up to full IC packages that may be applied to PCB assemblies.
Department: Electrical and Computer Engineering
Name: Pouria Tooranjipour
Date Time: Wednesday, October 16th, 2024 - 3:00 p.m.
Advisor: Dr. Bahare Kiumarsi
This dissertation develops high-performance safe control algorithms for autonomous systems under deterministic and stochastic uncertainties. The research is divided into two main parts: deterministic and stochastic control systems.
We focus on constructing safety certificates for unknown linear and nonlinear optimal control systems in the deterministic domain. We introduce an online method to develop control barrier certificates (CBCs) that expand the domain of attraction (DoA) without compromising performance. By formulating a feasible optimization problem using a relaxed algebraic Riccati equation (ARE) for linear systems and a relaxed Hamilton-Jacobi-Bellman (HJB) equation for nonlinear systems, alongside safety constraints, we identify the maximum barrier-certified region—called safe optimal DoA—where stability and safety coexist. To address the need for complete system dynamics knowledge, we propose an online data-driven approach employing a safe off-policy reinforcement learning algorithm, which learns a safe optimal policy while using a different exploratory policy for data collection.
Building upon these results, we incorporate disturbances using the $H_{\infty}$ control framework to attenuate unknown disturbances while ensuring safety and optimality. We unify the robustness of CBCs with $H_{\infty}$ control methods to construct a robust and safe optimal DoA. A feasible optimization problem is developed using the relaxed game algebraic Riccati equation (GARE), solved iteratively via a sum-of-squares (SOS)-based safe policy iteration algorithm. To demonstrate practical applicability, we develop a LiDAR-based model predictive control (MPC) framework that incorporates control barrier functions (CBFs). We reduce computational complexity by synthesizing CBFs from clustered LiDAR data and integrating them into the MPC framework while ensuring safety and recursive feasibility. We validate this approach through simulations and experiments on a unicycle-type robot.
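To give a feel for the control-barrier-function machinery used here, the following is a minimal sketch of the CBF safety-filter idea reduced to a scalar toy system; real deployments (like the LiDAR-based MPC above) solve a QP or MPC, but in one dimension the projection has a closed form. All names and constants are illustrative, not the dissertation's design.

```python
# Sketch: the control-barrier-function condition behind such safety
# filters, on a 1D integrator x' = u with safe set h(x) = x_max - x >= 0.
# The projection min ||u - u_nom|| s.t. dh/dx * u + alpha*h(x) >= 0
# reduces to a clip in one dimension.

def cbf_filter(x, u_nom, x_max=1.0, alpha=2.0):
    """Project the nominal control onto the CBF-safe set.
    For h = x_max - x: dh/dx = -1, so safety requires -u + alpha*h >= 0."""
    h = x_max - x
    u_safe_max = alpha * h        # upper bound on u implied by the CBF
    return min(u_nom, u_safe_max)

# Toy rollout: the nominal controller pushes hard toward the boundary.
x, dt = 0.0, 0.05
for _ in range(100):
    u = cbf_filter(x, u_nom=5.0)
    x += dt * u
print(x)                          # approaches x_max = 1.0, never crosses it
```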
In the stochastic domain, we synthesize risk-aware safe optimal controllers for partially unknown linear systems under additive Gaussian noise. By utilizing Conditional Value-at-Risk (CVaR) in the one-step cost function, we account for extremely low-probability events without excessive conservatism. Safety is guaranteed with high probability by imposing chance constraints. An online data-driven quadratic programming optimization simultaneously and safely learns the unknown dynamics and controls the system, tightening safety constraints as model confidence increases. We extend this framework to a fully risk-aware MPC for chance-constrained discrete-time linear systems with process noise, incorporating CVaR in both constraints and cost function. This approach ensures constraint satisfaction and performance optimization across the spectrum of risk assessments in stochastic environments. Recursive feasibility and risk-aware exponential stability are established through theoretical analysis.
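For reference, the sample-based CVaR estimate that such risk-aware formulations build on can be sketched in a few lines; this generic illustration is not the dissertation's optimization itself.

```python
# Sketch: sample-based Conditional Value-at-Risk, the mean of the worst
# (1 - alpha) fraction of losses. Generic illustration only.

import numpy as np

def cvar(losses, alpha=0.95):
    """CVaR: expected loss beyond the alpha-level Value-at-Risk."""
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)      # Value-at-Risk threshold
    tail = losses[losses >= var]          # worst (1 - alpha) outcomes
    return tail.mean()

rng = np.random.default_rng(0)
costs = rng.normal(1.0, 0.5, 10_000)      # stage costs under noise
print(np.mean(costs))                     # risk-neutral objective
print(cvar(costs, alpha=0.95))            # penalizes rare bad outcomes
```

Optimizing the CVaR rather than the mean is what lets the controller account for low-probability, high-cost events without being uniformly conservative.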
Finally, we present a data-driven risk-aware MPC framework where the mean and covariance of the noise are unknown and estimated online. We provide a computationally efficient solution to the multi-stage CVaR optimization problem using dual representations and data-driven ambiguity sets, casting it as a tractable semidefinite programming (SDP) problem. Recursive feasibility and risk-aware exponential stability are demonstrated, with numerical examples illustrating the efficacy of the proposed methods.
Overall, this dissertation addresses challenges in unknown dynamics, disturbances, risk assessment, and computational tractability, providing robust and efficient solutions for safe optimal control in both deterministic and stochastic settings.
Department: Electrical and Computer Engineering
Name: Xinda Qi
Date Time: Monday, August 5th, 2024 - 12:00 p.m.
Advisor: Dr. Xiaobo Tan
Soft robots are developed and studied for their safety and adaptability in various applications. Compared to their rigid counterparts, soft robots can use their deformable bodies to adapt to challenging environments and tolerate collisions and inaccuracies. Natural animals, due to their intrinsic softness, have become popular inspirations for many soft robots whose designs are influenced by biological structures.
Snakes, known for their adaptability and flexibility, inspire the development of limbless mobile robots for tasks in complex environments. In this work we first propose a novel pneumatic soft snake robot that uses traveling-wave deformation to navigate complex, constrained environments, such as pipeline systems. The unique pneumatic system in the modular snake robot generates traveling-wave deformation with only four independent air channels. Experimental results show good agreement with finite element modeling (FEM) predictions and demonstrate the robot's adaptability in complex pipeline systems. Additionally, a spiral-type soft snake robot is proposed for more robust locomotion in constrained environments, utilizing rotated helix-like deformation for propulsion.
Beyond locomotion in constrained environments, we develop a 3D-printed multi-material snakeskin with orthotropic frictional anisotropy, inspired by real snakeskin, to enable undulatory slithering of the robot on planar rough surfaces. This snakeskin comprises a soft base with embedded rigid scales. The designs generate various frictional anisotropies that propel the robot during serpentine locomotion. Experiments show effective serpentine locomotion on artificial and outdoor surfaces such as canvas and grass.
Given the complexity of the dynamic model of the snake robot's serpentine locomotion, a model-free reinforcement learning approach is chosen for integrated locomotion and navigation. We propose Back-stepping Experience Replay (BER) to enhance learning efficiency in systems with approximate reversibility, reducing the need for complex reward shaping. BER is used in the soft snake robot's locomotion and navigation task, with a dynamic simulator assessing the algorithms' effectiveness. The robot achieves a 100% success rate in learning, reaching random targets 48% faster than the best baseline approach.
In addition to mobile robots, bio-inspired soft robots have been proposed for robotic manipulators, enabling safe and robust interactions with humans and delicate objects. Inspired by octopus tentacles, we design a multi-section cable-driven soft robotic arm with novel kinematic modeling. An analytical static model captures the interaction between the actuation cable and the soft silicone body, and in particular, the transversal deformation effect. Experiments show that the soft robotic arm has high flexibility and a large workspace, and that the proposed model outperforms a baseline model in robot behavior prediction and open-loop tracking control.
Department: Electrical and Computer Engineering
Name: Jitendra Thapa
Date Time: Monday, July 29th, 2024 - 10:00 a.m.
Advisor: Dr. Mohammed Ben-Idris
Traditional power systems are transitioning toward more sustainable electricity generation and supply systems. One of the major contributors to this transition is the increased penetration of renewable energy resources, which helps promote clean energy production, diversify the energy mix, and reduce carbon emissions. However, the trend of increasing renewable energy resources has started to disrupt the conventional paradigm of power system operations. Therefore, modern electric utilities are looking for solutions to integrate these resources without disturbing the security and reliability of their existing systems.
Along with the rise of renewable energy resources and the retirement of conventional generation, distributed energy resources (DERs) are becoming more prevalent in modern electric grids. DERs are small-scale resources connected at the medium and low voltage distribution networks, which include, but are not limited to, photovoltaics (PV), wind, battery energy storage, and microturbines. With the evident expectation of heavy penetration of DERs in the near future, it has become more important than ever before to enable DERs to provide ancillary grid services. In this regard, DERs can be used independently or through aggregation to provide ancillary services to the grid. Though the contribution of a single DER or a distribution system consisting of multiple DERs to grid services may not be significant, stacked and coordinated contributions from several active distribution systems or aggregators can provide frequency regulation and other grid services at scale. However, the large-scale integration of DERs poses challenges in the planning, operation, and management of an existing power grid. These challenges call for developing a framework that provides avenues for their large-scale integration and assists in employing them for ancillary grid services. In this context, FERC Order 2222 has also established standards to enable and promote the participation of behind-the-meter DERs for several grid services. Whereas the regulations have been formulated, the practical challenges associated with their integration and adoption for ancillary grid services are still a concern for electric power utilities.
This dissertation addresses these critical challenges by developing a comprehensive framework and real-time control strategy to coordinate and optimally dispatch DERs and utility-scale resources for one of the important ancillary grid services: secondary frequency regulation. The study is conducted from the perspective of designing a novel mathematical model for implementing secondary frequency regulation at both the distribution and transmission levels. A deep reinforcement learning-based strategy is proposed that effectively manages diverse portfolios of resources, handles the complexities associated with their diverse characteristics, and accurately dispatches the resources for Automatic Generation Control (AGC). Furthermore, a serverless cloud computing architecture and grid response time analysis are developed for practical deployment of the proposed secondary frequency control algorithm in the field. Moreover, a comprehensive framework is developed to build electromagnetic transient (EMT) models of large-scale power grids that can be used to validate the proposed secondary frequency control on accurate power system models in real time. In addition, the proposed serverless cloud computing architecture, along with integration of simulation in a real-time digital simulator (RTDS), provides a high-fidelity prototype for practical deployment of secondary frequency regulation and is also flexible enough to implement other power system control problems.
The results, mathematical models, and large-scale power system models proposed in this study provide major advances and important insights toward enabling active distribution networks consisting of DERs and utility-scale resources to provide secondary frequency regulation.
In summary, the thesis presented here is that distributed energy resources, when properly coordinated and controlled, can provide frequency regulation and control at scale.
Department: Electrical and Computer Engineering
Name: Ciaron Nathan Hamilton
Date Time: Wednesday, July 3rd, 2024 - 12:00 p.m.
Advisor: Dr. Yiming Deng
Nondestructive Evaluation (NDE) 4.0 is an emerging approach to automating material inspection using innovative techniques from Industry 4.0. These techniques offer vast data acquisition and analysis potential for assessing physical components that require inspection or else risk structural failure. Conductive materials can be inspected with surface-scanning procedures such as eddy current testing (ECT), which utilizes electromagnetic induction to find defects in conductive materials. In this dissertation, ECT is used to detect corrosion before it continues to grow and damage larger components. Corrosion is “the cancer” of metallics, costing billions in irreversible damages annually. In some instances, corrosion may occur under paint, where it can be nearly invisible to visual inspection. In-place ECT may be used; however, many components need fast and robust scanning procedures. Fast scanning can be enabled with eddy current arrays (ECAs), whose repeated coils can be used to increase scan area or cut scan time, sweeping like a paintbrush that gathers information about the material's health. ECAs also allow for different configurations that may benefit data analysis, such as a differential scanning mode. Inspection may be automated using robotic arm systems equipped with ECAs, allowing for fast, repeatable, and robust scanning. This may be useful for large components that can be brought into a "robot arm sensor wash" system, such as automobiles or military vehicles. One barrier to enabling robust “freeform” scanning is obtaining the scan path along which the ECA will glide, as components come in different shapes and sizes, sometimes with curved or complex geometries.
The focus of this dissertation is to provide NDE 4.0 techniques, along with ECAs, to detect corrosion on curved steel sheets. The NDE 4.0 techniques employed merge cyber-physical systems (CPS), computer vision, and the concept of digital twins bridging physical and digital space. To enable NDE 4.0 for robotic inspection, a framework was developed with five major steps: obtain a reconstruction of the physical object and surrounding environment, orient this virtual scene with respect to the robot's base frame, generate a toolpath along which the NDE probe will be manipulated, conduct the ECA scan with 6 degrees of freedom (6-DOF), and process the NDE results. A novel algorithm, “ray-triangle intersection arrays,” was developed to enable pathing on meshes from a raster pattern. The framework was designed to generalize to any surface-scanning probe; UT scanning for carbon fiber inspection is also demonstrated using the same framework. For ECA, it is important to keep the probe close to the surface while minimizing lift-off, the distance between the sensor and the surface. At the scale of the defects examined, approximately 0.05 mm in depth at most, otherwise-minor tilts of the probe become significant.
The ECA probe contains 32 channels and was operated at 500 kHz in absolute scanning mode, allowing exceedingly small defect depths to be detected. The effects of ECA scanning with a robotic system are examined, showing that tilt errors, whether from the path-planning procedure or from the calibration of the robot, introduce significant measurement error. To better understand the effects per coil, a “full” scan mode producing a larger image per coil was examined alongside the typical painting scan, considered a “fast” scan. Other error sources, such as heating, were also examined. With this knowledge of robotic scanning errors, post-processing procedures were developed to minimize them. A novel algorithm, “array subtraction,” was developed to reduce lift-off effects from factors common to every coil, which indicate probe tilt error. A digital microscope was used to compare ground-truth defect volume with the ECA results, using defect-versus-background intersection masking. The three hypotheses discussed cover the generalized robust surface scanning framework, the dissection of the effects of robotic ECA scanning of corroded surfaces, and the processing and interpretation of the resulting ECA data. The results show promising future applications for robust surface scanning, as corrosion is detected with good fidelity. Future applications include the aforementioned carwash system, AI-enabled detection, and mobile platforms to expand inspection workspaces.
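The abstract does not specify “array subtraction” in detail; one plausible minimal reading, sketched below under that assumption, removes the response component common to all coils (a crude proxy for lift-off/tilt common mode) so that local defect signatures stand out. The synthetic data are illustrative only.

    import numpy as np

    def array_subtraction(scan):
        """scan: (n_coils, n_samples) ECA responses. Subtract the per-sample
        component shared by every coil, here taken as the across-coil mean."""
        common = scan.mean(axis=0, keepdims=True)
        return scan - common

    rng = np.random.default_rng(0)
    n_coils, n_samples = 32, 400
    tilt = np.linspace(0.0, 0.5, n_samples)                 # slow common-mode drift
    scan = tilt + 0.01 * rng.standard_normal((n_coils, n_samples))
    scan[10, 200:210] += 0.2                                # small localized "defect"
    cleaned = array_subtraction(scan)
    print(np.abs(cleaned).max(axis=1).argmax())             # coil 10 stands out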
Department: Electrical and Computer Engineering
Name: Hrishikesh Dutta
Date Time: Tuesday, May 7th, 2024 - 3:00 p.m.
Advisor: Dr. Subir Biswas
The proliferation of the Internet of Things (IoT) and Wireless Sensor Networks (WSNs) has led to the widespread deployment of devices and sensors across domains such as wearables, smart cities, agriculture, and health monitoring. These networks usually comprise resource-constrained nodes with ultra-thin energy budgets. As a result, it is important to design network protocols that judiciously utilize the available networking resources while minimizing energy consumption and maintaining network performance. Standardized protocols often underperform in general conditions because of their inability to adapt to changing networking conditions, including topological and traffic heterogeneities and various other dynamics. In this thesis, we develop a novel paradigm of learning-enabled network protocol synthesis to address these shortcomings.
The key concept here is that each node, equipped with a Reinforcement Learning (RL) engine, learns to find situation-specific protocol logic for network performance improvement. The nodes’ behavior in different heterogeneous and dynamic network conditions is formulated as a Markov Decision Process (MDP), which is then solved using RL and its variants. The paradigm is implemented in a decentralized setting, where each node learns its policies independently without centralized arbitration. To handle the challenge of limited information visibility in partially connected mesh networks in such decentralized settings, different design techniques, including confidence-informed parameter computation and localized information-driven updates, have been employed. We specifically focus on developing frameworks for synthesizing access control protocols that improve network performance from multiple perspectives, viz., network throughput, access delay, energy efficiency, and wireless bandwidth usage.
A multitude of learning innovations is adopted to explore the protocol synthesis concept in a diverse set of MAC arrangements. First, the framework is developed for a random access MAC setting, where the learning-driven logic is shown to minimize collisions while ensuring a fair share of wireless bandwidth across the network. A hysteresis-learning-enabled design handles the trade-off between convergence time and performance in a distributed setting. Next, the learning-driven protocols are explored in a TDMA-based MAC arrangement for decentralized slot scheduling and transmit-sleep-listen decision making. We demonstrate how the proposed approach, using a multi-tier learning module and context-specific decision making, enables nodes to make judicious transmission/sleep decisions on the fly to reduce energy expenditure while maintaining network performance. The multi-tier learning framework, comprising cooperative Multi-Armed Bandits (MAB) and RL agents, solves a multidimensional network performance optimization problem. The system is then improved from the scalability and adaptability perspectives by employing a Contextual Deep Reinforcement Learning (CDRL) framework. The energy management framework is further extended to energy-harvesting networks with spatiotemporal energy profiles, and a learning-confidence-parameter-guided update rule is developed to make the framework robust to unreliable RL observables. Finally, the thesis investigates protocol robustness against malicious agents, demonstrating the versatility and adaptability of learning-driven protocol synthesis in hostile networking environments.
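As a schematic illustration of the per-node RL engine described above (the state, action, and reward design below are assumptions, not the thesis's exact formulation), the following sketch gives a tabular Q-learning agent that learns a transmit/sleep/listen decision per TDMA slot from a collision/success reward.

    import random

    ACTIONS = ("transmit", "sleep", "listen")

    class NodeAgent:
        """Tabular Q-learner: state = slot index, action = transmit/sleep/listen."""
        def __init__(self, n_slots, alpha=0.1, gamma=0.9, eps=0.1):
            self.q = {(s, a): 0.0 for s in range(n_slots) for a in ACTIONS}
            self.n_slots, self.alpha, self.gamma, self.eps = n_slots, alpha, gamma, eps

        def act(self, slot):
            if random.random() < self.eps:                    # epsilon-greedy exploration
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: self.q[(slot, a)])

        def update(self, slot, action, reward):
            nxt = (slot + 1) % self.n_slots
            best_next = max(self.q[(nxt, a)] for a in ACTIONS)
            td = reward + self.gamma * best_next - self.q[(slot, action)]
            self.q[(slot, action)] += self.alpha * td

    # Toy 2-node, 2-slot loop: transmitting pays off only without collision;
    # sleeping saves energy (small positive), idle listening costs a little.
    nodes = [NodeAgent(2) for _ in range(2)]
    for step in range(5000):
        slot = step % 2
        acts = [n.act(slot) for n in nodes]
        tx = [i for i, a in enumerate(acts) if a == "transmit"]
        for n, a in zip(nodes, acts):
            r = {"transmit": 1.0 if len(tx) == 1 else -1.0,
                 "sleep": 0.1, "listen": -0.05}[a]
            n.update(slot, a, r)
    # Greedy policies learned per slot (nodes tend toward non-colliding schedules).
    print([[max(ACTIONS, key=lambda a: n.q[(s, a)]) for s in range(2)] for n in nodes])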
Department: Electrical and Computer Engineering
Name: Ehsan Ashoori
Date Time: Monday, May 6th, 2024 - 11:00 a.m.
Advisor: Dr. Andrew Mason
Assistive technologies have emerged as powerful tools for assessing physical health and wellness by monitoring physiological parameters such as movement and heart rate. However, our overall health is influenced not only by physiological parameters but also by mental health factors and environmental influences. Therefore, in the pursuit of holistic wellness, assistive technologies need to support multimodal sensing to monitor various aspects of individuals' health, including physiological health, mental wellness, and the environmental parameters that influence personal health and wellness. The challenges arise when these technologies must be implemented in real time on miniaturized point-of-care platforms, where multimodal sensing algorithms must run efficiently and resources, including power, are limited. Solving these challenges requires converging engineering practice with psychological and physiological principles. This work implements resource-efficient algorithms to assess social interaction parameters as an important mental health factor and enables high-performance point-of-care devices to monitor physiological and environmental parameters in a miniaturized and effective manner. An extensive dataset of human interaction in virtual settings was prepared, and efficient algorithms were developed to identify levels of two highly important social interaction parameters, ‘affect’ and ‘rapport’. We analyzed affect in time intervals based on conversation turns and analyzed rapport in 30-second intervals, the highest temporal resolution reported in the literature. We achieved an affect prediction accuracy of 77% and a rapport prediction accuracy of 72%, the highest reported results for analyzing multi-person groups. Furthermore, to support monitoring of physiological and environmental parameters, electrochemical sensing was identified as a highly effective method. We introduced a new architecture to overcome the limited supply potentials of modern point-of-care devices: in our novel design, the potential window for electrochemical reactions is double that of traditional designs, which in turn enables a significantly wider range of targets to be monitored. Overall, the enhanced algorithms and architecture introduced in this work enable multimodal sensing of important personal health and wellness parameters.
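The claimed doubling follows from simple supply-rail arithmetic (the numbers below are illustrative assumptions, not the design's actual rails): if a conventional single-ended potentiostat can only drive the cell between 0 and the supply V_DD, the usable electrochemical window is at most V_DD, whereas driving the two sides of the cell differentially, in opposition, spans -V_DD to +V_DD, i.e., 2 V_DD. With a 1.8 V supply, for example, the window grows from 1.8 V to 3.6 V, admitting redox reactions whose potentials would otherwise fall outside the measurable range.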
Department: Electrical and Computer Engineering
Name: Yu Zheng
Date Time: Friday, April 5th, 2024 - 8:30 a.m.
Advisor: Dr. Mi Zhang
The significant progress of deep learning models in recent years can be attributed primarily to growth in model scale and in the volume of training data. Although scaling up a model with sufficient training data typically improves performance, the memory and GPU hours required for training pose great challenges for deep learning infrastructures. A second challenge in training a good deep learning model is the quantity of data available: to achieve state-of-the-art performance, it has become standard to train or fine-tune deep neural networks on datasets augmented with well-designed transformations, which introduces the difficulty of efficiently identifying the best data augmentation strategies. Furthermore, dataset sizes have increased noticeably across many learning tasks, which constitutes the third challenge of modern deep learning systems: very large datasets impose great burdens on storage and training cost, and can make hyperparameter optimization and neural architecture search prohibitive.
In this dissertation, we address the first challenge from a model-centric perspective. We propose MSUNet, which is designed with four key techniques: 1) ternary conv layers, 2) sparse conv layers, 3) quantization, and 4) a self-supervised consistency regularizer. These techniques allow faster training and inference of deep learning models without significant accuracy loss. We then look at deep learning systems from a data-centric perspective. To deal with the second challenge, we propose Deep AutoAugment (DeepAA), a multi-layer data augmentation search method that removes the need to craft augmentation strategies manually. DeepAA fully automates the data augmentation process by searching for a deep data augmentation policy on an expanded set of transformations. We formulate the search for a data augmentation policy as a regularized gradient matching problem, maximizing the cosine similarity between the gradients of augmented data and original data, with regularization. To avoid exponential growth in the dimensionality of the search space as more augmentation layers are used, we incrementally stack augmentation layers based on the data distribution transformed by all previous augmentation layers. DeepAA achieves the best performance compared to existing automatic augmentation search methods evaluated on various models and datasets. To tackle the third challenge, we propose a dataset condensation method that distills the information of a large dataset into a small condensed dataset, realized by matching the training trajectories of the original dataset with those of the condensed dataset. Experiments show that our proposed method outperforms baseline methods, and we demonstrate that it can benefit continual learning and neural architecture search.
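The gradient-matching idea can be shown schematically. In the numpy sketch below (a toy stand-in, not the DeepAA implementation: gradients are abstract vectors and the policy is a single softmax layer), a policy weighting over candidate transformations is scored by the cosine similarity between the policy-averaged augmented gradient and the original-data gradient, minus a regularizer.

    import numpy as np

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def policy_score(logits, aug_grads, orig_grad, reg=0.1):
        """Cosine similarity between the policy-averaged augmented gradient
        and the original gradient, with an L2 policy regularizer."""
        w = softmax(logits)
        g = w @ aug_grads                      # expected gradient under the policy
        return cosine(g, orig_grad) - reg * (w ** 2).sum()

    rng = np.random.default_rng(1)
    orig_grad = rng.standard_normal(64)
    aug_grads = rng.standard_normal((8, 64)) + orig_grad   # 8 candidate transforms
    logits = np.zeros(8)
    for _ in range(200):                       # crude finite-difference ascent
        grads = np.array([(policy_score(logits + 0.01 * np.eye(8)[i], aug_grads, orig_grad)
                           - policy_score(logits, aug_grads, orig_grad)) / 0.01
                          for i in range(8)])
        logits += 0.5 * grads
    print(softmax(logits).round(2))            # learned transformation weights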
Department: Electrical and Computer Engineering
Name: Daniel Chen
Date Time: Thursday, April 4th, 2024 - 9:00 a.m.
Advisor: Dr. Jeffrey A. Nanzer
The need for fast and reliable sensing at millimeter-wave frequencies has increased dramatically in recent years across a wide range of applications, including non-destructive evaluation, medical imaging, and security screening such as concealed contraband detection. Imaging-based approaches are of particular interest since the wavelengths at millimeter-wave frequencies provide good resolution and propagate through clothing with negligible attenuation, allowing the identification of concealed contraband. While various implementations of millimeter-wave imaging have been developed, the new technique of active incoherent millimeter-wave (AIM) imaging, developed in our research group, is of particular interest because it addresses fundamental limitations inherent in other approaches. Furthermore, AIM enables imaging with significantly fewer elements than phased arrays and costs less than passive imagers. This is enabled by actively transmitting noise signals, allowing the system to capture scene information in the spatial Fourier domain. When the received signals at the array elements are spatio-temporally incoherent, the spatial coherence function of the captured signals represents samples of the measured visibility, which can be processed via an inverse Fourier transform to recover the measured scene. With a good-quality recovered image, additional processing can be applied for detection and/or classification of specific spatial features. However, images often contain more information than is necessary for effective classification, which means that unnecessary resources are spent collecting and processing redundant information.
In this dissertation, I present the design and analysis of array dynamics for radar and remote sensing applications. Specifically, I investigate approaches to measure specific spatial Fourier information that can be used for direct classification, thereby eliminating the need for full image recovery. I present an adapted formulation of the spatial coherence function that considers individual antenna trajectories within a dynamic antenna array; the measured visibility hence becomes a function of array trajectory over a slow-time dimension. The use of array dynamics further reduces the hardware requirements of the AIM technique by introducing a new degree of freedom in the array design. By allowing the receiving elements of the antenna array to move dynamically across the measurement plane, the spatial Fourier domain can be sampled efficiently using as few as two receiving antennas. The effects of trajectory choices on the measured spatial Fourier information are discussed. Furthermore, I expand on a specific array trajectory in which as few as two antennas generate a ring filter (i.e., a spatial Fourier sampling function in the form of a ring) that efficiently identifies spatial Fourier artifacts pertaining to sharp edges in the scene. This enables an imageless approach to differentiating scenes containing sharp-edged objects, which are generally artificial. I then present a real-time rotational dynamic antenna array operating at 75 GHz with two noise-transmitting sources, as required by the AIM technique, and two receivers to generate the ring filter. Compared to traditional millimeter-wave imaging, this non-imaging approach further reduces the required number of antennas. Experimental measurements using the AIM-based rotational dynamic antenna array demonstrate the possibility of detecting concealed contraband directly from the measured spatial Fourier domain information.
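The Fourier-domain relationship underlying both the imaging route and the imageless ring-filter route can be illustrated with an idealized, noise-free numpy sketch. In practice AIM estimates visibility from the spatial coherence of received noise signals; the direct FFT below is only a stand-in for that measurement, and all scene parameters are illustrative.

    import numpy as np

    # Toy scene: a sharp-edged rectangle (artificial object) on an empty background.
    scene = np.zeros((64, 64))
    scene[20:44, 28:36] = 1.0

    # Idealized visibility: samples of the scene's spatial Fourier transform.
    vis = np.fft.fftshift(np.fft.fft2(scene))

    # Imaging route: the inverse transform recovers the scene.
    recovered = np.abs(np.fft.ifft2(np.fft.ifftshift(vis)))

    # Imageless route: a ring filter samples only an annulus of spatial
    # frequencies, where sharp edges concentrate energy.
    ky, kx = np.indices(vis.shape)
    r = np.hypot(kx - 32, ky - 32)
    ring = (r > 12) & (r < 16)
    edge_stat = np.abs(vis[ring]).mean()

    # A smooth (edge-free) scene scores much lower on the same statistic.
    yy, xx = np.indices((64, 64))
    smooth = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / 200.0)
    vis_smooth = np.fft.fftshift(np.fft.fft2(smooth))
    print(edge_stat, np.abs(vis_smooth[ring]).mean())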
Department: Mechanical Engineering
Name: Mohammed Mizanur Rahman
Date Time: Wednesday, January 15th, 2025 - 1:00 p.m.
Advisor: Dr. Andre Benard
Plate heat exchangers (PHEs) are extensively used in various thermal systems due to their compact designs, high heat transfer coefficients, and superior scalability compared to other heat exchanger types. However, their performance often deteriorates due to uneven fluid distribution among channels, leading to non-uniform heat transfer and increased pressure drops. Performance enhancements can be achieved through the redesign of in-plane flow structures (fins) and modifications to header configurations. This study introduces novel three-dimensional twisted S-shaped fins to enhance thermal performance and presents comprehensive reduced-order thermo-hydraulic models to investigate flow maldistribution and rapidly optimize PHE designs for various header shapes.
The first part of this dissertation presents a PHE design incorporating three-dimensional twisted S-shaped fins, fabricated using additive manufacturing technology. These fins promote controlled fluid swirl and enhance heat transfer. Turbulent conjugate heat transfer simulations are conducted to assess the thermal and hydraulic performance of the proposed configurations. By systematically varying mass flow rates and fin geometries, an optimized design suitable for high-temperature, high-pressure applications is identified.
The second part of the study addresses flow maldistribution in PHEs caused by suboptimal header design. Computational Fluid Dynamics (CFD) analyses are conducted on PHEs with both straight and tapered header configurations to identify the header design that achieves the most uniform flow distribution. While introducing a tapered header can reduce the recirculation zone observed in straight headers, the study reveals, contrary to existing research, that tapered headers can increase flow maldistribution compared to straight headers. However, these CFD analyses are computationally intensive, making it challenging to identify the conditions under which tapered headers are advantageous.
To significantly reduce computational expenses, a reduced-order model is developed to rapidly assess the potential impact of tapered headers. This model, validated against existing research, is capable of estimating both flow distribution and pressure drop within PHEs with minimal computational resources. Key structural parameters such as header diameter, number of channels, channel area, and taper ratio are identified as critical factors influencing flow distribution. These parameters play a crucial role in determining the choice between tapered and uniform headers. One of the most significant findings is the identification of the range of ζ values, representing flow resistance inside the channels, where tapered headers provide more uniform flow compared to straight headers. The predictive modeling framework is further extended to more complex header geometries, including parabolic and hyperbolic shapes, thereby advancing the understanding of fluid distribution in complex geometries and contributing to the design of more efficient, reliable, and cost-effective PHEs.
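As a schematic of what such a reduced-order model computes (far simpler than the validated model described above, with linear segment losses assumed purely for illustration), the sketch below iterates per-channel flows in a Z-type manifold until every flow path sees the same total pressure drop; the ratio of channel resistance ζ to header loss then controls the uniformity.

    import numpy as np

    def flow_distribution(n=20, zeta=5.0, k_header=0.2, iters=500):
        """Toy Z-type manifold with linear losses: channel dp = zeta*q_i,
        header segment dp = k_header*Q_seg. Iterate until all paths share
        one total pressure drop (equal-dp fixed point with damping)."""
        q = np.full(n, 1.0 / n)
        for _ in range(iters):
            upstream = 1.0 - np.concatenate(([0.0], np.cumsum(q)[:-1]))  # inlet header flow
            gathered = np.cumsum(q)                                      # outlet header flow
            dp_in = k_header * np.concatenate(([0.0], np.cumsum(upstream)[:-1]))
            dp_out = k_header * np.cumsum(gathered[::-1])[::-1]
            dp_hdr = dp_in + dp_out
            c = (zeta + dp_hdr.sum()) / n            # chosen so sum(q_new) = 1
            q_new = np.clip((c - dp_hdr) / zeta, 0.0, None)
            q = 0.5 * q + 0.5 * q_new / q_new.sum()  # damped update
        return q

    q = flow_distribution()
    print(q.min() / q.max())   # uniformity index; larger zeta gives more uniform flow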
Finally, a comprehensive heat transfer model developed for PHEs is integrated with the predictive model. The resulting thermo-hydraulic model incorporates the role of header configuration in flow maldistribution and constitutes a tool for selecting appropriate structural parameters. This integrated model enables rapid evaluation of the impact of flow maldistribution on the effectiveness of PHEs without extensive computational resources. Overall, this dissertation contributes a novel design framework for PHEs, supporting applications in sustainable energy systems and industrial processes.
Department: Mechanical Engineering
Name: Amin Vahidimoghaddam
Date Time: Thursday, December 5th, 2024 - 8:30 a.m.
Advisor: Dr. Zhaojian Li
Nonlinear optimal control schemes have achieved remarkable performance in numerous engineering applications; however, they typically incur high computational cost, which has limited their use in real-world systems with fast dynamics and/or limited computation power. To address this challenge, the neighboring extremal (NE) method has been developed as an efficient adaptation strategy that adjusts a pre-computed nominal control solution to perturbations from the nominal trajectory. The resulting control law is a time-varying feedback gain that can be pre-computed along with the original optimization problem, making the online computation negligible. This thesis focuses on reducing the computational cost of nonlinear optimal control problems using the NE method, in two parts. In Part I, we tackle model-based nonlinear optimal control and propose an extended neighboring extremal (ENE) method to handle model uncertainties and reduce computational cost. Nonlinear model predictive control (NMPC), which explicitly deals with system constraints, is considered as the case study due to its popularity, but the ENE can easily be extended to other model-based nonlinear optimal control schemes. In Part II, we address data-driven nonlinear optimal control and introduce a data-enabled neighboring extremal (DeeNE) method to remove the parametric-model requirement and reduce computational cost. As a purely data-driven optimal and safe controller, data-enabled predictive control (DeePC) moves from model-based optimal control to a data-driven formulation: it seeks an optimal control policy from raw input/output (I/O) data without encoding the data into a parametric model or requiring system identification prior to control deployment. DeePC is considered as the case study, but DeeNE can easily be extended to other data-driven nonlinear optimal control approaches. We also develop an adaptive DeePC and implement DeeNE on a real-world robotic arm.
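For readers unfamiliar with the NE idea, the unconstrained case reduces to a pre-computable time-varying linear feedback around the nominal pair (x̄_k, ū_k); a standard form, analogous to the gains in differential dynamic programming and shown here as background rather than as this thesis's exact formulation, is

    u_k = \bar{u}_k - Q_{uu,k}^{-1} Q_{ux,k} \,(x_k - \bar{x}_k),
    \qquad
    Q_{uu,k} = \ell_{uu,k} + B_k^\top V_{xx,k+1} B_k,
    \quad
    Q_{ux,k} = \ell_{ux,k} + B_k^\top V_{xx,k+1} A_k,

with A_k, B_k the linearized dynamics, ℓ the stage cost, and V_xx the value-function Hessian from a backward Riccati-type recursion; constrained variants, as in ENE and DeeNE, add corrections for changes in the active constraint set.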
Department: Mechanical Engineering
Name: Haritha Naidu Mullagura
Date Time: Monday, November 11th, 2024 - 11:00 a.m.
Advisor: Dr. Seung Baek
Pulmonary arterial hypertension (PAH) is a progressive and multifactorial disease characterized by pathological vascular remodeling, metabolic shifts, and dysregulation of key pathophysiological pathways. Predicting patient-specific responses to treatment requires a detailed understanding of pulmonary arterial mechanics, particularly the complex interactions between vascular geometry, hemodynamics, and pharmacological effects. However, most existing computational models are centered on healthy vasculature and fail to incorporate the influence of pharmacological treatment pathways in diseased states. To bridge this gap, we have developed a novel computational framework: a bio-chemomechanical model that integrates the essential biomechanical features of PAH-affected arteries and predicts arterial responses to various therapeutic interventions.
Our research group has previously established a healthy pulmonary arterial vasculature model using a homeostatic optimization process, an extension of Murray’s law. This optimization minimizes the total energy required to maintain blood flow, accounting for viscous dissipation, metabolic costs, and mechanical equilibrium constraints. By doing so, it generates a geometrically and energetically optimized arterial tree representative of a healthy physiological state. In contrast to the healthy vasculature model, which results from optimizing metabolic energy consumption, there is a growing body of evidence that the homeostatic stress state and metabolic energy consumption of resident cells are altered during the progression of PAH. For instance, studies have shown that pulmonary artery smooth muscle cells (PASMCs) in PAH shift towards glycolysis even in the presence of oxygen, a phenomenon known as the Warburg effect. Mitochondrial dysfunction, reduced oxidative phosphorylation, and decreased ATP production further disrupt energy dynamics in PAH-affected cells. Additionally, the upregulation of hypoxia-inducible factor (HIF) in PAH patients triggers cellular responses that promote vascular remodeling and metabolic shifts.
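As background on the homeostatic optimization, the classical Murray trade-off for a single vessel segment of radius r and length l carrying flow Q balances Poiseuille pumping power against a metabolic maintenance cost proportional to blood volume (the group's extension adds mechanical equilibrium constraints; α_m is a metabolic cost coefficient):

    P(r) = \frac{8 \mu l Q^2}{\pi r^4} + \alpha_m \pi r^2 l,
    \qquad
    \frac{dP}{dr} = 0 \;\Rightarrow\; Q = \frac{\pi r^3}{4}\sqrt{\frac{\alpha_m}{\mu}} \;\propto\; r^3,

which at a bifurcation yields the familiar r_0^3 = r_1^3 + r_2^3.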
Therefore, rather than relying on metabolic optimization, we create an in-silico PAH model using a data-driven approach; that is, the work relies on experimental data to inform structural and functional changes, reflecting the complexity of the disease. Specifically, starting from the healthy model, we incorporate changes in geometry, hemodynamics, and pathological factors derived from experimental studies of PAH. Given the limited availability of metabolic cost data specific to PAH, we propose a set of testable hypotheses for computing metabolic energy consumption in the diseased vasculature, which enhances our understanding of the role of altered metabolic processes using the existing literature.
Once the biomechanical structure of the PAH vasculature is established, we conduct an in-depth study of the chemical pathways involved in PAH treatment. This includes the development of mathematical models for key signaling pathways such as the nitric oxide-cGMP-PKG pathway, which plays a pivotal role in smooth muscle cell relaxation and vasodilation. Additionally, we perform pharmacokinetic analyses of various drugs, including PDE5 inhibitors and Sotatercept, to evaluate their effects on the vasculature.
The resulting bio-chemomechanical model integrates these biomechanical and chemical processes, offering a comprehensive framework capable of predicting arterial responses to different PAH treatments. The model captures the dynamic interactions between hemodynamics, vascular geometry, and the pharmacological mechanisms underlying various therapies. By simulating these interactions, the model provides valuable insights into how different treatments impact arterial mechanics and can be used to guide personalized therapeutic strategies.
In conclusion, this integrated framework presents a promising tool for advancing personalized medicine in PAH management. By simulating both the mechanical and chemical responses of the pulmonary vasculature to various treatments, the model enhances our ability to predict patient-specific treatment outcomes. Moreover, it can be extended to explore other therapeutic pathways and vascular diseases, providing a versatile platform for future research into vascular remodeling and pharmacological interventions.
Department: Mechanical Engineering
Name: Amirreza Gandomkar Ghalhar
Date Time: Wednesday, November 6th, 2024 - 1:00 p.m.
Advisor: Dr. Patton Allison
This thesis presents a comprehensive study of liquid fuel flame topologies through the development and application of novel diagnostic techniques. The complexities associated with liquid fuel combustion, particularly in the context of aviation and aerospace applications, demand a deeper understanding of flame behavior and stability. Traditional diagnostic methods often fall short due to the intricate interactions between liquid droplets, flame surfaces, and multi-component fuel mixtures.
Our research focuses on addressing these challenges by introducing advanced diagnostic approaches to investigate the structure, stability, and extinction characteristics of liquid fuel flames. Key areas of exploration include the identification and analysis of reaction zones, the impact of vaporization dynamics, and the effects of turbulent flow conditions on flame stabilization. To achieve this, we employ Laser-Induced Fluorescence (LIF) and chemiluminescence imaging, alongside advanced numerical image processing algorithms, to capture high-resolution data on flame behavior. These methods enable us to discern fine details about flame front interactions, droplet vaporization, and localized extinction events. By refining these diagnostic tools, we aim to provide clearer insights into the parameters influencing flame stability, such as equivalence ratio, mixing efficiency, and preheat temperature.
In addition, the study integrates computational simulations using CHEMKIN to validate experimental results, allowing for a more comprehensive understanding of how liquid fuel combustion behaves under varying conditions of turbulence and strain rates. The combined experimental and computational approach ensures that the findings are both robust and applicable to real-world aerospace scenarios.
The findings of this study contribute to the broader understanding of liquid fuel combustion processes and offer valuable implications for the design and optimization of more efficient and stable combustion systems in aerospace applications. This research not only enhances our theoretical knowledge but also provides practical guidelines for improving flame diagnostics and combustion performance.
Department: Mechanical Engineering
Name: Corey Gamache
Date Time: Tuesday, October 22nd, 2024 - 10:00 a.m.
Advisor: Dr. Guoming Zhu
Turbocharged engines often suffer from significant intake manifold pressure response delay due to so-called turbo-lag. Many technologies have been investigated to combat this phenomenon, and combinations of them are often utilized together. The addition of these technologies to already complicated modern engines presents a significant control challenge due to significant system nonlinearity, especially over the large operating range of engine speeds and loads. In this dissertation, a Ford 6.7L 8-cylinder diesel engine equipped with a variable geometry turbocharger (VGT) and exhaust gas recirculation (EGR) is additionally augmented with an external electric compressor, or eBoost, along with a bypass valve to mitigate turbo-lag without negatively impacting emissions. The air charge system has two control targets: intake manifold pressure and EGR rate.
First, a dual-output proportional-integral-derivative (PID) controller is proposed for controlling the boost pressure using both VGT and eBoost to reduce turbo-lag, and a transition logic is developed to detect transient operating conditions for activating the eBoost as well as closing and opening the bypass valve. The EGR rate PID control remains unchanged from the production control scheme. The addition of the eBoost is shown to experimentally improve transient response time by up to 55% and reduce transient NOx emissions by up to 42% during transitional operations without negatively impacting steady-state engine performance or emissions.
Second, a model-based control strategy is developed to illustrate the benefit of a modern coordinated control strategy compared to the production-style PID control scheme. A multiple-input multiple-output (MIMO) Linear Quadratic Tracking with Integral (LQTI) control strategy, along with its gain-scheduling and transition logic, is developed for the diesel engine air charge system. Multiple model-based LQTI controllers were designed at different engine operating conditions based on the associated linearized models, and the control outputs are scheduled based on the engine load condition and bypass valve position. The developed control strategy is validated in both simulation and experimental studies; the experimental results show a reduction in engine response time of up to 81.36% in reaching the target intake manifold pressure following a load step-up, compared with the production configuration without eBoost and bypass valve, with no significant impact on NOx emissions. The LQTI strategy is additionally compared with the dual-output PID control strategy and is shown to improve intake manifold response speed by up to 57%.
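For background, a standard way to obtain tracking with integral action of the kind the LQTI name suggests (shown as the generic textbook structure, not the dissertation's exact design) is to augment the plant state with the integral of the tracking error and solve an LQR problem on the augmented system:

    \dot{z} = r - y,
    \qquad
    x_a = \begin{bmatrix} x \\ z \end{bmatrix},
    \qquad
    u = -K x_a = -(K_x x + K_z z),

with K obtained by minimizing J = \int_0^\infty (x_a^\top Q x_a + u^\top R u)\,dt; gain scheduling then interpolates or switches K across the linearization points.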
Finally, a model-based, gain-scheduled control strategy is developed utilizing a constrained H2 linear parameter-varying (LPV) approach. The nonlinear eBoost air charge system is modeled as a function of two scheduling parameters, engine load and bypass valve position, and three LPV controllers are designed to cover the defined operating range of these parameters. The LPV controllers and a controller switching logic are implemented on the experimental setup, and the transition logic previously developed for the LQTI strategy is adapted for the LPV system. The LPV control strategy is validated in simulation and experimental studies and is shown to experimentally achieve a reduction in intake manifold response time of up to 84% compared to the production control strategy without eBoost following a load step-up, with no significant impact on NOx emissions. The LPV strategy also improves intake manifold pressure response speed by up to 65% compared with the dual-output PID control strategy, and achieves performance close to the LQTI strategy while providing additional benefits in terms of performance and stability guarantees and simplicity of implementation.
Department: Mechanical Engineering
Name: Duncan Kroll
Date Time: Tuesday, October 15th, 2024 - 10:00 a.m.
Advisor: Dr. Abraham Engeda and Dr. Nusair Hasan
Purification systems are necessary to support the commissioning and operation of medium- to large-scale cryogenic refrigeration systems using various cryogenic working fluids. The present research focuses on helium refrigeration systems that operate at 4.5 K (just above the normal boiling point of helium) down to 1.8 K (which requires helium with a vapor pressure of 16 mbar). At these very low temperatures, the presence of any substance except helium (a contaminant) results in solidification. Even trace amounts of impurities in the process fluid can block and/or change the flow distribution in the refrigerator's heat exchangers and potentially damage rotating equipment operating at high speeds. Therefore, helium purifiers for these refrigerators are typically designed for low-level impurity removal (i.e., 1-100 ppmv) of moisture and air components, since gross impurities are removed during the initial clean-up and commissioning of the system.
Purification of the process gas (helium) is typically achieved by molecular sieve adsorption beds at room temperature for moisture removal and a liquid nitrogen (LN) cooled activated carbon adsorption bed for air (nitrogen/oxygen/argon) removal. However, past studies and operational experience show that molecular sieves cannot remove low-level moisture contamination effectively. Freeze-out purification has great potential to reliably remove low-level moisture contamination but requires careful design. Typical commercially available freeze-out purifiers have a much shorter operating time between regenerations than should be achievable, are not optimized for low-pressure operation, and require large amounts of utilities such as liquid nitrogen. Furthermore, frost formation in a purifier heat exchanger is not well understood. Developing an understanding of this process, and studying the design and process parameters that can improve it for this critical sub-system, is the focus of this research.
This work begins with an experimental study of a commercially available helium freeze-out purifier, tested under practical and controlled operating conditions with different contamination levels and flow-capacity imbalances. Auxiliary equipment was designed, fabricated, tested, and operated to achieve controlled and tunable low-level moisture contamination in the helium stream, and the performance and moisture capacity of the purifier heat exchanger were characterized. Following the experimental study, a series of theoretical studies was carried out. First, a heat and mass transfer model on an isothermal surface was developed to establish a base-level understanding of frost formation and relate it to the existing literature. This model was used to study the effects of gas pressure, wall temperature difference, reduced temperature differential, absolute humidity, and carrier gas on frost growth and mass transfer. A simplified estimate of frost thickness was developed and found to be accurate within 1%. Second, the model was extended to a heat exchanger surface, validated using test data, and used to study the effects of flow imbalance and inlet moisture contamination level; this study revealed that flow maldistribution within the heat exchanger caused significant discrepancies between many of the experimental and simulation results. Third, to eliminate the effects of flow maldistribution and reduce utility usage, a novel purifier design is studied. It considers a coiled finned-tube design to maximize surface area for heat exchange and mass collection. An initial exergy analysis was performed to determine a reasonable reference design geometry, and the effects of fin density and heat exchanger mandrel diameter on frost formation and heat exchanger performance were studied. The novel purifier was found to hold approximately as much frost as the commercially available purifier while using significantly less nitrogen for cooling.
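A common closure in such coupled heat and mass transfer models of frost deposition (stated here as standard background; the dissertation's specific correlations may differ) is the Chilton-Colburn analogy, which relates the mass transfer coefficient h_m to the heat transfer coefficient h, with the deposition flux driven by the vapor density difference between the bulk gas and the frost surface:

    h_m = \frac{h}{\rho\, c_p\, \mathrm{Le}^{2/3}},
    \qquad
    \dot{m}'' = h_m \left( \rho_{v,\infty} - \rho_{v,s}(T_s) \right),

where Le is the Lewis number and ρ_{v,s} is the saturation vapor density at the frost surface temperature.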
Department: Mechanical Engineering
Name: Mohamed Abdullah Alhaddad
Date Time: Friday, September 6th, 2024 - 10:30 a.m.
Advisor: Dr. Andre Benard
Modeling the rate of fluid penetration into capillaries due to surface tension forces is often based on the Poiseuille flow solution. However, this model does not apply to short capillaries due to non-fully developed conditions at the entrance and exit regions. Improved models are needed for small capillary systems, which are crucial in processes such as oil droplet removal from water using thin membranes. Previous research has addressed deviations from Poiseuille flow near the entrance and moving meniscus, including the use of momentum conservation equations and inertia forces in kinetic models for infinite flow entering capillary tubes. Some studies have considered finite reservoir infiltration, assuming parallel flow lines, but neglected local acceleration due to inertia and gravity effects. This study presents a novel analysis focusing on the dynamic behavior of droplets in pores. It models a finite flow reservoir associated with a droplet and includes drag forces at the capillary channel entrance. The mathematical model incorporates pressure losses due to sudden contraction and viscous dissipation at the tube entrance, which can be significant in low Reynolds number flows. Additionally, it considers energy dissipation due to contact angle hysteresis. The model addresses an apparent anomaly posed by Washburn-Rideal and Levin-Szekely, and is applied to various liquids including water, glycerin, blood, oil, and methanol. It is tested with different geometries and cases, including numerical simulations, showing close agreement with experimental literature. Deviations are observed when comparing infinite reservoir flow to finite droplet flow.
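For context, the classical Lucas-Washburn baseline that the present model extends gives the penetration length l(t) in a horizontal capillary of radius r, neglecting inertia, entrance losses, contact angle hysteresis, and finite-reservoir effects (precisely the terms reinstated in this work):

    l(t)^2 = \frac{\gamma\, r \cos\theta}{2\mu}\, t,

with γ the surface tension, θ the contact angle, and μ the liquid viscosity.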
A parametric study evaluates the effects of dimensionless numbers such as capillary, Reynolds, Weber, and Froude numbers. Results suggest the Weber number's importance over the capillary number in droplet dynamics. The study also examines finite flow and film penetration in single pores versus pore networks. Computational simulations using ANSYS-FLUENT 23 R2 provide 2D results, using User Defined Functions (UDF) to capture liquid-gas interfaces. These simulations corroborate the mathematical model. Contrary to previous findings, this study demonstrates that contact angle effects are significant in the initial stages of capillary penetration. The proposed solution is valid for very short initial times, applicable to printing, lithographic operations, and filtration systems dealing with oil droplet removal from water using membranes.
Furthermore, the framework allowed us to examine two different approaches to delaying lithium plating in graphite: a thermodynamic approach, hybrid anodes in which graphite is mixed with hard carbon, and a kinetic approach, tunnels in which synthetic channels are introduced into the electrode. Through our simulations, we identify that hard carbon particles act as a buffer for lithiation in hybrid anodes, delaying the surface saturation of graphite particles and thus delaying lithium plating on graphite. On the other hand, creating tunnels generates easier paths for ion diffusion and therefore leads to better utilization of the electrode; such channels in thick electrodes can yield high-capacity, efficient electrodes. Finally, the development of this framework culminates in a demonstration of full-cell simulations. In summary, the presented framework streamlines the simulation of electrochemical processes in complex electrode microstructures and offers a fast and robust tool for designing and studying microstructures.
Department: Mechanical Engineering
Name: Igor Igorevich Bezsonov
Date Time: Tuesday, August 27th, 2024 - 10:00 a.m.
Advisor: Dr. Siva Nadimpalli
Modern technology, from portable electronics to electric vehicles, is becoming increasingly reliant on lithium-ion (Li-ion) batteries for energy storage. This chemistry possesses a desirable combination of high power and high energy density and is therefore widely used, but safety remains a significant issue. The risk of thermal runaway (TR) is a major roadblock to the widespread use of this technology. TR is a self-sustaining exothermic reaction that can be triggered by mechanical or electrical damage to a cell, overheating, or latent manufacturing defects. The volumetric changes within a cell’s electrodes and internal gas generation can be detected by strain measurements on the surface of the casing, which can complement the electrical and thermal data used by battery management systems (BMS) and even provide insight into the state of a battery when electrical contact has been lost. This research project demonstrates the utility of strain measurements for detecting abnormal Li-ion cell behavior and precursors to TR. First, a baseline was established to identify the strain response of Li-ion cells under normal operating conditions, accounting for temperature and cycling rate (C-rate) effects. Then, the cells were cycled under abuse conditions to identify signs of damage and of TR onset through strain measurements. The final step was to develop a model that uses fundamental data and electrochemical input to predict the mechanical behavior of individual electrodes and full 18650 cells.
The samples used in this research were commercial 18650-format (18 mm in diameter, 65 mm tall) cylindrical cells with graphite-silicon anodes and nickel cobalt aluminum oxide (NCA) cathodes. Strain data were collected using strain gages bonded to the cell casing and used to characterize the cells' mechanical behavior during both normal and abuse cycling. During a charge-discharge cycle at normal conditions, the surface strain was found to be nearly reversible; that is, the strain states at the beginning of charge and the end of discharge were almost the same. The strain profile of the cells was analyzed and found to be directly related to electrochemical reactions occurring within the electrodes, as evidenced by dQ/dV and dε/dV plots. The fact that the dε/dV peaks coincide with, and sometimes precede, the peaks in the dQ/dV plots shows that the electrochemical reactions occurring within the electrodes during charge and discharge can be sensed through strain measurements on the surface of the cell casing.
With the baseline established, cells were subjected to several abuse scenarios. In the first, cells were overcharged to failure, which came in the form of current interrupt device (CID) activation. During overcharge (past 4.2 V) the cell potential increased quickly and reached a plateau at approximately 5 V, shortly after which the CID activated and the cell became electrically inaccessible (0 V). The cells’ surface strain also increased dramatically during this scenario, reaching more than double the peak strain of normal cycling. The CID-activated cells were then heated to TR, during which two events were identified from the strain signature as signs/precursors of TR that could be used for prediction and prevention. Cells were also repeatedly overcharged to 105% and 110% of nominal capacity, termed 5% and 10% overcharge (OC), respectively. Maximum strain, potential, and temperature increased slowly during the 5% OC experiments and quickly during 10% OC, during which the CID activated after an average of 11 cycles. Strain at full discharge (residual strain) reached a progressively higher value after each OC cycle and was found to correlate closely with the pressure needed to activate the CID. Electrochemical impedance spectroscopy, dQ/dV, and dε/dV analyses confirmed that the degradation modes present were mostly caused by loss of lithium inventory. The insights gained from strain measurement, including the ability to predict CID activation, are discussed.
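The differential analyses referred to above can be reproduced schematically: given sampled potential, charge, and surface strain from a slow cycle, dQ/dV and dε/dV follow from smoothed numerical differentiation. The sketch below is one conventional way to compute them; the window sizes and the synthetic data are illustrative assumptions.

    import numpy as np
    from scipy.signal import savgol_filter

    def differential_curves(V, Q, strain, window=51, poly=3):
        """dQ/dV and d(strain)/dV with Savitzky-Golay smoothing, since raw
        finite differences of cycling data are noise-dominated."""
        Vs = savgol_filter(V, window, poly)
        Qs = savgol_filter(Q, window, poly)
        es = savgol_filter(strain, window, poly)
        return np.gradient(Qs, Vs), np.gradient(es, Vs)

    # Synthetic slow charge: a plateau-like feature mid-charge shows up as
    # near-coincident peaks in dQ/dV and d(strain)/dV.
    t = np.linspace(0, 1, 1000)
    V = 3.0 + 1.2 * t - 0.02 * np.exp(-((t - 0.5) / 0.05) ** 2)
    Q = 2.5 * t
    strain = 1e-4 * (t + 0.3 * np.tanh((t - 0.5) / 0.05))
    dQdV, dedV = differential_curves(V, Q, strain)
    print(V[np.argmax(dQdV)], V[np.argmax(dedV)])   # peaks near the same potential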
A finite element modeling approach was developed to predict the mechanical behavior of individual electrodes and full cells. Electrochemistry was solved in COMSOL Multiphysics using a pseudo-4-dimensional (P4D) model to predict the cell potential and the state of charge of the active material within the electrodes. Mechanics were coupled to electrochemistry through volumetric changes of the active material and a thermal strain analogy. The effective mechanical properties of the electrodes were calculated using the Mori-Tanaka homogenization scheme, with the development and assumptions explained fully in this work. The homogenized properties were compared to experimental and published results and found to be in good agreement. Simulations of stress in a graphite anode and a nickel manganese cobalt oxide (NMC) cathode agreed with published data, and predictions were also made for graphite-silicon anodes, an NMC cathode, and a geometry representative of an 18650-format battery. The limitations of and future improvements to this model are discussed.
Department: Mechanical Engineering
Name: Aaron Feinauer
Date Time: Tuesday, August 20th, 2024 - 11:00 a.m.
Advisor: Dr. Andrew Benard
Extreme-temperature heat exchangers capable of operating between 800°C and 1100°C at pressures greater than 80 bar are considered a critical component for ultra-high-efficiency power generation and a range of next-generation industrial processes. A promising application for this research thrust is the use of carbon dioxide as a working fluid, whose critical point is 73.8 bar and 31°C. Compared to traditional steam- or air-based power cycles, a supercritical carbon dioxide (sCO2) cycle requires less compression work near the critical point and achieves higher cycle efficiencies, enabling a smaller plant footprint. Given the extreme temperatures and pressures required for heat exchange, however, this poses a significant materials and system design challenge. This research seeks to develop an efficient and cost-effective test facility to enable rapid testing and verification of heat exchangers within this temperature and pressure range, using nitrogen as a surrogate fluid for carbon dioxide.
A bench-scale test facility was first developed for moderate temperatures and pressures (100°C, 100 psi) to develop friction factor and Nusselt number correlations for twisted S-shaped fins and to validate computational fluid dynamics (CFD) models of various fin configurations. A polyimide thermofoil heater was compressed between a mirrored pair of additively manufactured heat exchanger plates fitted into a set of aluminum headers, and flat aluminum plates were used for comparison against the twisted S-shaped finned plates made from titanium. Compared to other results in the literature, the correlations developed here for flat plates and finned surfaces are enhanced by inlet impingement and outlet transition effects: the friction factor is up to 20.1 times larger for the flat plate correlations, while the twisted S-shaped fins are up to 7.2 times greater than the literature would suggest; for the Nusselt number, the flat plate correlation is up to 6 times larger and the twisted S-shaped fins up to 2.5 times larger. Compared to the experimental results, the CFD errors for friction factor are within -21.63% for the flat plate and -16.74% for the twisted S-fins, while the maximum error in the Nusselt number is within +20.87% for the flat plate and on the order of -54.14% for the twisted S-shaped fins. The differences between experiment and CFD are attributable to contact resistance between the heater and plate surfaces and to the roughness of the printed fins.
A 5 kW test facility was then developed for heat exchanger characterization, capable of operating at 250 bar and 300°C on the cold side and 80 bar and 1100°C on the hot side. The primary research related to this facility covers the development of process heat at high flow rates with a high inlet temperature, the management of the high-temperature throttling process between the cold side and the hot side, and the optimization of the headers for integration with the heat exchanger. Process heat was generated by a U-shaped graphite heating element with internal hexagonal channels that allow prediction of heat transfer properties, and self-cooled nickel 200 alloy conductors accommodate the extreme inlet temperatures expected in sCO2 recuperative flows.
The inlet conditions to the heater reached as high as 450°C due to losses, while the outlet flow was generally limited to less than 1100°C for the duration of the experiments at 80 bar. A thick, sharp-edged orifice plate was used for high-temperature compressible flow control at 7 g/s of N2 from 250 bar to 80 bar, and a subset of this research develops the compressibility factors required to determine the flow rate-pressure drop relationship for orifice diameters from 0.50 mm to 0.70 mm. Finally, a set of headers was developed with internal cooling channels and temperature monitoring to accommodate the extreme temperature and pressure conditions seen within the heat exchanger. A careful energy balance was performed to determine the best approach for optimizing the design and mitigating heat losses, enabling more accurate heat exchanger characterization in future design iterations.
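As a rough consistency check on the orifice sizing (ideal-gas choked flow only; the discharge coefficient, stagnation temperature, and real-gas factor Z below are assumptions, and developing proper compressibility corrections at 250 bar is exactly what this work addresses), the standard choked-orifice expression gives the right order of magnitude:

    import numpy as np

    def choked_mdot(P0, T0, d, Cd=0.8, gamma=1.4, R=296.8, Z=1.0):
        """Ideal choked mass flow (kg/s) through a sharp-edged orifice for N2.
        Valid when P_down/P_up < ~0.528; here 80/250 = 0.32, so flow is choked."""
        A = np.pi * (d / 2.0) ** 2
        crit = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
        return Cd * A * P0 * np.sqrt(gamma / (Z * R * T0)) * crit

    # 0.60 mm orifice, 250 bar, assumed 573 K upstream: ~9 g/s, the same
    # order as the reported 7 g/s operating point.
    print(choked_mdot(250e5, 573.0, 0.60e-3))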
Department: Mechanical Engineering
Name: Muhammad Rubayat Bin Shahadat
Date Time: Monday, August 12th, 2024 - 3:00 p.m.
Advisor: Dr. Farhad Jaberi
Direct Numerical Simulations (DNS) of a spatially developing supersonic turbulent shear layer are conducted for a range of convective Mach numbers (Mc), velocity parameters (λ), and density Atwood numbers (A) to examine the effects of compressibility, advection, and multi-fluid global density variation on the growth rate, self-similarity, flow statistics, asymmetry, and entrainment of the layer. At distant downstream locations, self-similarity is attained for all examined cases. The self-similar region is identified by the collapse of the normalized mean streamwise velocity, the constant peak of the normalized Reynolds stresses, and the linear growth of the shear layer thickness and momentum thickness. Despite significant variations in the lower-order and higher-order statistics across different convective Mach numbers, velocity parameters, and density Atwood numbers, the profiles collapse within the self-similar region under our proposed self-similar scaling. The observed numerical trends and profiles are consistent with the literature and can be explained via compressible self-similar equations and models.
The self-similar forms of the continuity, streamwise momentum, transverse momentum, and energy equations are formulated, incorporating both compressibility and centerline shifts. The self-similar normalized density distribution inside the layer is used to explain the effects of compressibility on various flow statistics, including the far-field cross-stream velocity, and the density variation is linked to dissipation effects, as revealed by our analysis of the self-similar energy equation. An approximate equation for the cross-stream velocity is developed, and the profiles obtained from this equation are compared with the DNS results. A geometric interpretation of the entrainment ratio is presented, and the approximate cross-stream velocity equation is used to provide a general expression for the entrainment ratio. The entrainment ratio increases with convective Mach number and velocity parameter, favoring excess entrainment on the high-speed side. Introducing global density variation in the multi-fluid flow enhances the layer asymmetry compared to the single-fluid shear layer, meaning that the shear layer centerline and the peak of the Reynolds stresses shift further towards the lower-momentum side. Beyond enhanced asymmetry, increasing the global density variation further reduces the shear layer growth rate. A comparative study of the effects of compressibility and global density change on flow variables such as mean density and cross-stream velocity reveals some of the interesting features of the simulated compressible multi-fluid shear layer. Despite significant differences in the lower- and higher-order statistics at different density Atwood numbers, the mean flow profiles collapse within the self-similar zone under our suggested scaling, and the geometric interpretation of the entrainment ratio helps explain its decrease with increasing Atwood number.
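For reference, two standard definitions used in such analyses (stated as common conventions; the thesis's exact scaling may differ) are the convective Mach number and the density-weighted momentum thickness, with the similarity coordinate built on the shifted centerline:

    M_c = \frac{U_1 - U_2}{a_1 + a_2},
    \qquad
    \delta_\theta(x) = \int_{-\infty}^{\infty} \frac{\bar{\rho}}{\rho_0}\,
    \frac{(U_1 - \bar{u})(\bar{u} - U_2)}{(U_1 - U_2)^2}\, dy,
    \qquad
    \eta = \frac{y - y_c(x)}{\delta_\theta(x)},

where U_1, U_2 and a_1, a_2 are the free-stream velocities and sound speeds, and y_c is the shear layer centerline.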
Department: Mechanical Engineering
Name: Anirudh Suresh
Date Time: Wednesday, July 24th, 2024 - 2:00 p.m.
Advisor: Dr. Kalyanmoy Deb
The typical aim of a multi-objective evolutionary algorithm (MOEA) is to identify a well-converged and uniformly distributed set of Pareto-optimal (PO) solutions. This step is followed by a multi-criterion decision-making (MCDM) step in which the decision-maker (DM) must select a desired solution for further consideration. We propose methods for the convenient execution of these two steps. We present and compare several unique identifiers for PO solutions with respect to their properties, advantages, and disadvantages in optimization, visualization, and decision-making. We propose methods to achieve a superior distribution of solutions in these spaces and demonstrate that a combination of these identifiers can be used during optimization. A well-represented set of PO solutions cannot be guaranteed at the end of optimization, and an incomplete PO front can be problematic for decision-making; we propose a machine learning-assisted MCDM framework that can alleviate some of these issues. We also propose integrating these MCDM concepts into optimization to build confidence in the achieved PO solutions.
Department: Mechanical Engineering
Name: Sai Guruprasad Jakkala
Date Time: Friday, June 28th, 2024 - 1:00 p.m.
Advisor: Dr. Andre Benard and Dr. S Vengadesan
A majority of the equipment used in industry operates in the turbulent flow regime. Designing such equipment requires many iterations, often performed using computer simulations, and turbulence modelling is computationally expensive and time-consuming. In this study we investigate different turbulence models and their application to designing cyclone separators and novel plate heat exchangers. The performance of the various models is studied, and the simulations are used to provide insight and guidance on the redesign of these two important systems. Hydrocyclones and heat exchangers are ubiquitous in industry.
A good understanding of the flow features in cyclone separators is paramount to using them efficiently. The turbulent flow characteristics are modeled using URANS, Large Eddy Simulation (LES), and hybrid LES/Reynolds-averaged Navier-Stokes (RANS) turbulence models. The hybrid LES/RANS approaches, namely detached eddy simulation (DES), delayed detached eddy simulation (DDES), and improved delayed detached eddy simulation (IDDES), based on the k-omega SST RANS approach, are explored. The study is carried out for three different inlet velocities. The results from the hybrid LES/RANS models are shown to be in good agreement with experimental data available in the literature. Reduction in computational time and mesh size are the two main benefits of using hybrid LES/RANS models over traditional LES methods. The Reynolds stresses are examined to understand the redistribution of turbulent energy in the flow field, and the velocity profiles and vorticity quantities are explored to obtain a better understanding of the fluid flow behavior in cyclone separators. The better prediction of turbulent quantities from the hybrid models can help in better modeling the multiphase interactions, and using these improved predictions we are able to design a cyclone separator for reduced erosion.
Supercritical CO2 cycles operating at high efficiency require new heat exchangers that can operate at high temperature (above 800°C) and high pressure (above 80 bar) for tens of thousands of hours. In this thesis, we discuss modified metallic plate heat exchangers with new twisted S-shaped fins that can withstand such temperatures and pressures. Novel 3D twisted S-shaped fins are developed for better heat exchanger performance; the fins have a twist that induces a swirl in the flow, resulting in enhanced heat transfer. The Ni-based superalloy Haynes 214 is used for the heat exchanger plates and fins, and the heat exchanger is manufactured using additive manufacturing processes. Turbulent conjugate heat transfer simulations are carried out to obtain the temperature and pressure profiles in the heat exchanger in the turbulent regime, a parametric study is conducted to determine the performance of the newly developed 3D twisted S-shaped fins, and the CFD results are compared with experiments.
The studies in this thesis resulted in an improved cyclone separator design with extended operating life due to reduced erosion (up to 90%) without much compromise in efficiency. The 3D twisted S-shaped fins provide a better performance evaluation criterion (PEC) than conventional S-shaped fins, a 10%-13% improvement in performance, along with a considerable reduction (up to 75%) in pumping requirement.
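For reference, a commonly used form of the performance evaluation criterion compares the heat transfer enhancement against the friction penalty at equal pumping power (the exact definition adopted in the thesis may differ):

\[ \mathrm{PEC} = \frac{Nu/Nu_0}{\left(f/f_0\right)^{1/3}} \]

where Nu and f are the Nusselt number and friction factor of the enhanced (3D twisted) fin geometry, and Nu_0 and f_0 are those of the baseline S-shaped fins.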
Department: Mechanical Engineering
Name: Michael Hayes
Date Time: Monday, April 29th, 2024 - 2:00 p.m.
Advisor: Dr. André Benard
The intermittency of renewable energy sources necessitates storage technologies that can help provide consistent output on demand. A promising area of research is thermochemical energy storage (TCES), which utilizes high-temperature chemical reactions to absorb and release heat. While promising, TCES technologies often rely on storing chemically charged materials at high temperatures, complicating handling and posing serious challenges to long-duration storage. A pioneering approach known as SoFuel (solid-state solar thermochemical fuel) proposed using counterflowing solid and gas streams in a particle-based moving-bed reactor to achieve heat recuperation and allow flows to enter and exit the reactor at ambient temperatures. Previous work successfully demonstrated operation of a reduction (charging) reactor based on this concept; this dissertation describes the development of a companion oxidation (discharging) reactor.
The countercurrent, tubular, moving-bed oxidation setup permits solids to enter and exit at ambient temperatures, while a separate extraction port in the middle of the reactor produces high-temperature process gas. A bench-scale experimental apparatus was fabricated for use with 5 mm particles composed of a 1:1 molar ratio of MgO to MnO, a redox material that exhibits high oxidation temperatures (around 1000°C) and excellent cyclic stability. The experimental reactor system successfully demonstrated self-sustaining thermochemical oxidation at temperatures exceeding 1000°C. Many trials achieved largely steady operation, showcasing excellent operational stability during hours-long experiments. With the aid of user-manipulated inputs, the reactor produced extraction temperatures in excess of 950°C and demonstrated efficiencies as high as 41.3%. An extensive experimental campaign revealed thermal runaway in the upper reaches of the particle bed as a risk to safe, stable reactor operation.
To better understand reactor dynamics and evaluate potential control schemes, a three-phase, one-dimensional finite-volume computational model was developed. The model successfully emulated behavior observed in the reactor experiments and further illustrated the impact of the three system inputs (solid flow rate, gas extraction flow rate, and gas recuperation flow rate) on overall behavior. A five-zone adaptive model predictive controller (MPC) was developed using a linearized control-volume model as its basis. The controller sought to regulate the size, temperature, and position of the chemically reacting region of the particle bed through several novel approaches. These approaches were tuned and refined iteratively using the 1D computational model, after which they were successfully deployed on the experimental setup. Future work concerns scaling up the oxidation system for larger rates of energy extraction, further analysis of optimal reactor startup procedures, and alternative controller formulations.
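As a generic illustration of the one-dimensional finite-volume approach (the dissertation's three-phase model is far more detailed, coupling solid, gas, and wall energy balances with reaction source terms), the Python sketch below advances a single temperature field through explicit steps of upwind advection and central diffusion on a uniform grid; all names and parameter values here are illustrative assumptions:

import numpy as np

def fv_step(T, u, alpha, dx, dt):
    # One explicit finite-volume update on a uniform 1D grid:
    # first-order upwind advection (u > 0) plus central diffusion.
    adv = -u * (T[1:-1] - T[:-2]) / dx
    dif = alpha * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    T_new = T.copy()                 # boundary cells held fixed (Dirichlet)
    T_new[1:-1] += dt * (adv + dif)
    return T_new

T = np.linspace(300.0, 1300.0, 50)   # initial axial temperature profile, K
for _ in range(1000):                # CFL = u*dt/dx = 0.05, stable
    T = fv_step(T, u=0.01, alpha=1e-5, dx=0.02, dt=0.1)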
Department: Mechanical Engineering
Name: Anshul Tomar
Date Time: Friday, April 26th, 2024 - 11:00 a.m.
Advisor: Dr. Ranjan Mukherjee
Bernoulli pads can create a significant normal force on an object without contact, which is why they are traditionally used for non-contact pick-and-place operations in industry. In addition to the normal force, the pad produces shear forces, which can be utilized to clean a workpiece without contact. The motivation for the present work is to understand the flow physics of the Bernoulli pad so that it can be employed for non-contact biofouling mitigation of ship hulls. Numerical investigations have shown that the shear stress distribution generated by the pad on the workpiece is concentrated, with the maximum shear stress occurring very close to the neck of the pad. This maximum wall shear stress is an important metric of the cleaning efficacy of the Bernoulli pad. We use numerical simulations over a range of the parameter space to develop a relationship between the inlet fluid power and the maximum shear stress on the workpiece. To increase the shear forces, we explore the possibility of adding mechanical power to the system in addition to the fluid power. The flow field between the Bernoulli pad and the workpiece involves a transition from laminar to turbulent flow and a recirculation region; the maximum shear stress occurs in the vicinity of the recirculation region, and to gain confidence in the numerical solver's ability to estimate these stresses accurately, experiments were conducted with a hot-film sensor.
A direct relationship between the maximum shear stress on the workpiece and the inlet fluid power was obtained using dimensional analysis. A relationship between the maximum shear stress and the inlet Reynolds number is also obtained, and the implications of these scaling relationships are studied. The direct relationship between the inlet fluid power and the shear losses motivates us to explore other methods of providing power to the system, with the objective of increasing shear forces and thereby improving cleaning efficacy. We numerically investigate a Bernoulli pad in which additional mechanical power is added by rotating the pad. This additional power increases both the normal and shear forces on the workpiece for the same inlet fluid power. In the context of the rotating Bernoulli pad, it was found that for a given normal attractive force, a stable equilibrium configuration can exist for two different mass flow rates, with the higher mass flow rate resulting in a higher stiffness of the flow field; this phenomenon has not been reported in the literature. The shear stress distribution obtained using numerical simulations is validated using experiments for the first time. A constant-temperature anemometer is used with a hot-film sensor and water as the working fluid; the sensor is calibrated using a fully developed channel flow. An experimental setup is designed to calibrate and later measure the wall shear stress in a Bernoulli pad assembly. The maximum wall shear stress is observed very close to the neck of the pad due to flow constriction and separation; the hot-film experiments accurately capture the magnitude of the maximum shear stress and its location. This provides confidence in the numerical solver, which can be used to optimize the Bernoulli pad design to improve its cleaning efficacy.
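As a hedged sketch of the kind of scaling relationship that dimensional analysis yields here (the functional form is standard; the constants and exponents below are placeholders, not the thesis's fitted values), the maximum wall shear stress can be non-dimensionalized by the inlet dynamic pressure and correlated with the inlet Reynolds number:

\[ \frac{\tau_{\max}}{\tfrac{1}{2}\rho U_{in}^{2}} = C\, Re_{in}^{\,n}, \qquad Re_{in} = \frac{\rho U_{in} d}{\mu} \]

where U_in and d are the inlet velocity and diameter, rho and mu are the fluid density and viscosity, and C and n are constants determined from the simulations.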
Department: Mechanical Engineering
Name: Saima Alam
Date Time: Monday, April 23rd, 2024 - 10:00 a.m.
Advisor: Dr. Norbert Mueller
Air-conditioning systems consume a significant portion of the energy in an automotive system; hence, any improvement in the performance or efficiency of automotive air-conditioning contributes to the energy efficiency and design economy of the vehicle. There has been substantial research interest in improving the design of individual HVAC components for efficiency, and many of these improvements have already been implemented. However, due to the non-linear and dynamic nature of automotive air-conditioning and cooling systems, there is still room to improve the efficiency of the integrated unit by improving the control strategy for such systems instead of focusing on individual components alone.
With advances in machine learning and programming capabilities, various novel control strategies and algorithms for non-linear systems are now available. To apply these algorithms, black-box models of the specific air-conditioning system are constructed from elaborate experimental data. Despite generating optimized control parameters, these methods provide little insight into the inner dynamics of the system and how they impact system behavior. For this reason, a robust, physics-based dynamic model of the automotive air-conditioning system is required to formulate improved control strategies.
The goal of this research is to develop a transient model of the automotive heat pump system for cabin space conditioning, including the non-static time-delay features of the thermal expansion valve (TXV) used as the expansion device. A modular trans-critical vapor-compression system built at MSU with sponsorship from Ford was adapted to run with sub-critical refrigerants for experimental validation of the model and for system identification tests. From the understanding of the TXV dynamics, a method was developed to control an electronic expansion valve (EXV) to perform as well as or better than the specimen TXV in the system. The heat pump cycle simulation results matched the experimental results within an acceptable error margin, and the system coefficient of performance with the developed EXV control strategy was found to be equivalent to that of the cycle with the specimen thermostatic expansion valve. This work will enable easy conversion from TXV to EXV systems by recommending hardware features and control parameters for a similar performance level in automotive systems. Furthermore, generalized transfer functions of the components were developed for the analysis and recommendation of improved control strategies in automotive air-conditioning systems using thermal and electronic expansion valves.
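As an illustration of the kind of component transfer function involved (a generic first-order-plus-dead-time form often used for expansion valve dynamics, not necessarily the one identified in this work), the valve response relating superheat to valve opening can be approximated as:

\[ G(s) = \frac{K\, e^{-\theta s}}{\tau s + 1} \]

where K is the static gain, tau the time constant, and theta the time delay; identified parameters of this kind can then inform the EXV controller tuning.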
Department: Mechanical Engineering
Name: Bryce Thelen
Date Time: Monday, April 15th, 2024 - 10:00 a.m.
Advisor: Dr. Elisa Toulson
Research into technologies that improve the efficiency of the internal combustion engine has been motivated over the past several years by increasingly stringent fuel economy and emissions standards in the United States automotive market. Lean-burn operation of spark-ignited (SI) internal combustion engines has the potential to help meet the high fuel economy goals of the coming decade by improving the efficiency of SI engines at partial loads. Although efficiency gains are found for engines operating with diluted mixtures, these mixtures present difficulties in the form of slow flame speeds and the poor ignitability associated with lean or diluted air-fuel mixtures. Two types of ignition systems that attempt to mitigate these negative effects are examined here: a radio-frequency plasma-enhanced ignition system and a prechamber-initiated ignition system called Turbulent Jet Ignition (TJI).
First, the effects of a plasma-enhanced ignition system on the performance of a small, single-cylinder, four-stroke gasoline engine are examined. Dynamometer testing of the 33.5 cm3 engine at various operating speeds was performed with both the engine's stock coil ignition system and a radio-frequency plasma ignition system. The radio-frequency system is designed to provide a quasi-non-equilibrium plasma discharge and features a high-voltage pulser that delivers 400 mJ of energy per discharge at voltages of up to 30 kV. Tests show improved combustion stability at all operating conditions and extension of the engine's lean flammability limit with the radio-frequency system. Particular attention is given to the improvements the radio-frequency system provides while burning lean air-fuel mixtures. Additionally, gas analysis of the 33.5 cm3 engine's exhaust and high-speed images of the radio-frequency system taken in a separate 0.4 liter optical engine are presented.
Second, fully three-dimensional computational fluid dynamics simulations with detailed chemistry of a single-orifice TJI device installed in a rapid compression machine are presented. The simulations were performed using the CFD software CONVERGE and its RANS turbulence models. Simulations of propane-fueled combustion are compared to data collected in the optically accessible rapid compression machine on which the model's geometry is based, to establish the validity and limitations of the simulations and to compare the behavior of the different air-fuel ratios used. In addition to the comparison with a companion experimental study, the effects of TJI orifice size and prechamber spark location are investigated. The data generated in the simulations are analyzed, and insights into the processes that make up the operation of the TJI are given. Finally, CFD analysis tools are applied to the early development and design of a TJI system intended for a heavy-duty diesel engine being converted to run on natural gas.
Department: Mechanical Engineering
Name: Philipp Schimmels
Date Time: Friday, April 5th, 2024 - 1:00 p.m.
Advisor: Dr. Andre Benard
Large-scale storage of renewable energy is necessary to increase the reliability of this intermittent but abundantly available resource. Of special concern is the storage of energy and its subsequent use in industrial processes requiring high-temperature heat. A promising emerging technology is based on redox reactions of metal oxides at high temperatures. The shelf-stable redox material MgMnO was identified as a potential candidate due to its high energy density, cyclic stability, high reaction temperature, and good scalability. This work describes the conception, design, manufacturing, testing, and improvement of a solid-fuel reduction reactor used to charge the energy storage material MgMnO. The reactor enables continuous charging of the pelletized material via a packed bed moving through a 1500°C furnace. A counter-currently flowing sweep gas separates the released oxygen from the charged material to prevent re-oxidation. It also acts as a heat recuperation carrier that cools charged particles and pre-heats particles before they enter the reaction zone; this approach enables high thermal efficiency, as the sensible heat is almost entirely recovered. A lab-scale reactor was built and tested successfully. Challenges such as particle flowability at high temperatures, fluidization of the bed, and low extent of reaction were encountered and solved by managing the counter-flowing gas and increasing the residence time of the particles in the reactor. The reactor output reached a maximum of 2500 W of charged chemical potential. Several models were developed and used to design experiments and validate the performance of the system. The high energetic cost of separating oxygen from the nitrogen sweep gas was identified as a roadblock to improved efficiencies and potential scale-up of the system, which led to mathematical and experimental investigation of water vapor as an alternative sweep gas. Results show that water vapor is superior to nitrogen as a sweep gas for the reduction and has a lower energetic cost of production. The proposed reactor can be scaled up, and the results of this study indicate that the pelletized MgMnO material offers thermochemical energy storage at low cost. The extraction of this energy at high temperature offers a path toward the decarbonization of a variety of industrial processes that currently rely on the combustion of hydrocarbon fuels for high-grade heat.
Department: Mechanical Engineering
Name: Ru Tao
Date Time: Monday, April 1st, 2024 - 2:30 p.m.
Advisor: Dr. Michele Grimm
Vaginal childbirth, also known as delivery or labor, is the final phase of pregnancy, a biomechanical process in which one or more fetuses pass from the uterus through the birth canal. This risky process can cause significant injuries to both the fetus and the mother, such as brachial plexus injury, pelvic floor disorders, or even death. For technical and ethical reasons, experiments are difficult to conduct on laboring women and their fetuses, so computer modeling has become a promising and rapidly growing way to improve our knowledge of the biomechanics of labor and delivery. The simulation models developed in this field have focused on either uterine active contraction or the pelvic floor muscles individually, and many limitations exist in current uterus models.
The goal of this project is to develop an integrated model system including the uterus, the fetus, the pelvic bones, and the pelvic floor muscles, which will allow advanced simulation and investigation within the field of fetal delivery biomechanics. As a first step, a computational model simulating the active contraction behavior of muscle tissue was developed in LS-DYNA, in which the muscle tissue was composed of active contractile fibers using the Hill material model and a passive portion using elastic and hyperelastic material models. The model was validated against experimental results, which demonstrated the accuracy and reliability of the modeling methodology in describing a muscle's active contraction and relaxation behaviors. Second, a simulation model of a whole uterus during the second stage of labor was developed, which included active contractile fibers and a passive muscle tissue wall. The effects of fiber distribution on uterine contraction behavior were investigated, and the delivery of a fetus moving through the uterus under the contraction was simulated. The developed uterus model included several important uterine mechanical properties, such as the propagation of the contraction wave, the anisotropy of the fiber distribution, contraction intensity variation within the uterus, and the pushing effect on the fetus. Finally, an integrated model system of labor was established by incorporating the pelvic structures with the uterus and fetus models. The model system successfully delivered the fetus from the uterus and through the birth canal. The simulation results were validated against available data and clinically observed phenomena, such as the stress distribution within the uterus, the von Mises and principal stresses of the pelvic floor muscles, and the rotation and movement of the fetus. Overall, a finite element model system simulating the labor process was developed in LS-DYNA, which will be used to investigate disorders related to labor, such as neonatal brachial plexus injury and maternal pelvic floor muscle injuries.
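For context, a common form of the Hill-type active-contraction law (the specific LS-DYNA material formulation used in the dissertation may differ) expresses the total fiber stress as:

\[ \sigma_f(t) = \sigma_{\max}\, a(t)\, f_L(\lambda)\, f_V(\dot{\lambda}) + \sigma_p(\lambda) \]

where a(t) is the time-varying activation, f_L and f_V are normalized force-length and force-velocity functions of the fiber stretch lambda and its rate, and sigma_p is the passive elastic or hyperelastic contribution.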
Department: Mechanical Engineering
Name: Eli Broemer
Date Time: Monday, April 1st, 2024 - 11:30 a.m.
Advisor: Dr. Sara Roccabianca
Bladder health and dysfunction are not well understood. Research with mouse models is an effective way to study soft tissue and organ function, especially given the genetic tools available in this species. Despite this advantage, bladder research in mice still lags behind other animal models; in particular, mechanical testing and analysis of mouse bladder tissue are nearly nonexistent in the literature. In this dissertation, experimental ex vivo pressurization of whole mouse bladders was used to analyze the mechanical stresses and stretches in the soft tissue. Bladder filling cycles were digitally reconstructed in 4D, and the reconstructions were used to characterize the geometry and mechanics of the bladder as it fills. This work contributes to the bladder mechanics literature, as this level of 4D mechanical analysis of bladder filling in a mouse model has not been shown before.
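As a point of reference for such pressurization analyses, the thin-walled Laplace relation (an idealization; the 4D reconstructions in this work permit far more detailed estimates) gives the mean wall stress of a pressurized spherical vessel as:

\[ \sigma = \frac{P\, r}{2\, t} \]

where P is the intraluminal pressure, r the current radius, and t the wall thickness.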
Department: Mechanical Engineering
Name: Jonathon Winslow Howard
Date Time: Thursday, March 21st, 2024 - 12:00 p.m.
Advisor: Dr. Abraham Engeda
Operation of helium cryogenic systems below the normal boiling point of helium (approximately 4.2 K) has become a common need for modern high-energy particle accelerators. Nominal cooling near 2 K (or a corresponding saturation pressure of approximately 30 mbar) is often required by superconducting radio-frequency niobium resonators (also known as SRF cavities) to achieve the performance targets of the particle accelerator. To establish this cooling temperature, the cryogenic vessel (or cryostat) containing the SRF cavities is operated at the sub-atmospheric saturation pressure by continuously evacuating the vapor from the liquid helium bath. Multi-stage cryogenic centrifugal compressors ('cold compressors') have proven to be an efficient, reliable, and cost-effective means of achieving sub-atmospheric cryogenic operating conditions for large-scale systems. These compressors re-pressurize the sub-atmospheric cryogenic helium to just above atmospheric conditions before injecting the flow back into the main helium refrigerator. Although multi-stage cryogenic centrifugal compressor technology has been implemented in large-scale cryogenic systems since the 1980s, theoretical understanding of their operation (steady-state and transient) is inadequate to provide a general characterization of the compressors and establish stable wide-range performance. The focus of this dissertation is two-fold. First, to develop a reliable performance prediction model for a multi-stage cryogenic centrifugal compressor train, validated with measurements from an actual operating system; the model's capabilities include steady-state performance estimation and prediction of operational envelopes that ensure stable, wide-range steady-state operation. Second, to develop and validate a process model of the entire sub-atmospheric system (e.g., FRIB) and establish a simple methodology for obtaining a reliable thermodynamic path for the transient ('pump-down') process of reducing the helium bath pressure from above 1 bar to the operational steady-state conditions near 30 mbar. The effectiveness of the developed methodology is demonstrated by comparing estimated and measured process parameters from the sub-atmospheric system studied (i.e., FRIB). The developed model and methodology are intended to benefit the design and operation (both steady-state and transient) of multi-stage cryogenic centrifugal compressor trains used in large-scale cryogenic helium refrigeration systems.
Department: Mechanical Engineering
Name: Md Sarower Hossain Tareq
Date Time: Tuesday, January 16th, 2024 - 11:00 a.m.
Advisor: Dr. Patrick Kwon and Dr. Haseung Chung
Nitinol is highly attractive for biomedical applications because of its unique shape memory and superelastic properties as well as acceptable biocompatibility. Additive manufacturing (AM) is receiving significant attention for making complex and patient-customizable nitinol devices. However, due to its high microstructural and compositional sensitivities, it is still challenging to fabricate functional NiTi devices via AM. It has been widely reported that evaporation of Ni, oxidation of Ti, and formation of precipitate phases during fabrication significantly divert the expected functional properties. To date, laser powder bed fusion (LPBF) has been the technique of choice among AM methods for fabricating NiTi devices, but successful fabrication has been limited to NiTi substrates because of poor bonding to other substrates (e.g., steel and Ti). In this work, a multi-step printing approach was systematically developed that enabled printing NiTi on a Ti substrate using a very low laser energy density of 35 J/mm3 without any visible defects. This printing method reduced warping caused by process-induced residual stress and avoided Ni evaporation as well as the formation of undesirable precipitate phases during printing. It was also found that a higher oxygen level in the printing chamber reduced the austenite finish (Af) temperature and negatively affected printability. These results demonstrate the feasibility of LPBF for printing NiTi on a substrate other than nitinol, providing a possible route to reducing the cost of NiTi fabrication via AM.
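For reference, the volumetric laser energy density quoted above is conventionally computed as (assuming the standard LPBF definition, which the thesis may state differently):

\[ E = \frac{P}{v\, h\, t} \]

where P is the laser power, v the scan speed, h the hatch spacing, and t the layer thickness.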
The as-printed NiTi sample exhibited a distinct one-step phase transformation with an Af temperature of 2.1°C. To increase the Af temperature to 30.2°C (within the recommended range for biomedical applications), a heat treatment protocol was developed consisting of a solution cycle (900°C for 1 hour) followed by an aging cycle (450°C for 30 minutes). This protocol attained a homogenized microstructure while creating an ultrafine metastable Ni-rich precipitate, Ni4Ti3, which facilitated the desirable phase transformation behavior with the increased Af temperature. The heat-treated sample showed a narrower and sharper two-step martensitic phase transformation with the formation of an intermediate R-phase; the presence of both Ni4Ti3 and the R-phase was confirmed by transmission electron microscopy (TEM). In superelasticity tests at body temperature, these samples, starting from the 2nd cycle, demonstrated a recovery ratio of more than 90% and a recoverable strain of more than 6.5%. After the 10th cycle, the stable recoverable strain was 6.52% with a recovery ratio of 96%, which is, to the best of our knowledge, the highest superelasticity reported for LPBF-processed NiTi. After the initial deformation process, we expect these samples to attain near-full superelasticity during service. A micro-hardness study also showed that the hardness of the heat-treated samples is less affected by cyclic loading.
Nitinol stents are attractive because they are self-expandable and behave superelastically when deployed inside the body. In contrast to the multi-step conventional manufacturing route, AM is attractive for making nitinol stents since it provides one-step processing as well as wide latitude for customizable designs. However, the individual struts of a stent are less than 150 µm thick, which makes them very challenging to fabricate by LPBF with structural accuracy, mechanical integrity, and proper superelasticity. In this work, the LPBF processing parameters and post-process surface finishing were systematically developed to minimize porosity, avoid structural failure during deformation, and maximize superelasticity at body temperature. The processed thin strut showed an Af temperature of 26°C (below body temperature) and demonstrated 91% strain recovery with 4.1% recoverable strain at body temperature.
This work presents an important roadmap for making NiTi devices by AM while maintaining the excellent functional properties of NiTi for biomedical applications.
Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Mechanical Engineering at 355-5131 at least one day prior to the seminar; requests received after this date will be met when possible.