Research Theses

This collection contains Ph.D. and Master's (Research) theses from University of Adelaide postgraduate students.

Note that in many cases the full content of the thesis is not available here; instead we have scanned the title page, contents pages and abstract from each thesis and made that available as a PDF file. In 2015, the Library began a project to digitise and make all research theses available online. We expect new theses to be deposited here in complete form. Older theses may be included here on request from the author.

Theses listed here will also be included in the National Library of Australia's Trove service.

Recent Submissions

Now showing 1–20 of 12,577
  • ItemRestricted
    Glycaemic consequences of non-nutritive sweeteners in human health and type 2 diabetes
    (2024) Rose, Braden David; Young, Richard L.; Ivey, Kerry (Harvard Medical School); Adelaide Medical School
    Prospective epidemiological studies have consistently shown that high habitual consumption of non-nutritive sweeteners (NNS) is positively associated with an increased risk of type 2 diabetes (T2D). However, the mechanisms that underlie this association are not well defined. All sweet stimuli, including NNS, are detected by a single broadly-tuned sweet taste receptor (STR) located on taste cells of the tongue, as well as in a range of extra-oral tissues and organs, including enteroendocrine cells of the small intestine. Activation of these intestinal STRs leads to the release of gut hormones, including glucose-dependent insulinotropic polypeptide (GIP) from K-cells and glucagon-like peptide-1 (GLP-1) and -2 (GLP-2) from L-cells. GIP and GLP-1 augment postprandial insulin release, while GLP-1 also slows gastric emptying, to control postprandial glycaemia; GLP-2 increases expression and function of the primary intestinal apical glucose transporter, sodium-glucose cotransporter-1 (SGLT-1), to augment glucose absorption. Recent randomised controlled trials have added support that dietary NNS activation of the STR-SGLT-1 axis, which augments the rate of intestinal glucose absorption, and NNS perturbation of gut microbiome composition and function can both evoke dysglycaemia in healthy individuals. However, the precise contribution of each mechanism is unclear. Moreover, the effects of high habitual NNS consumption on glycaemia in individuals with T2D remain poorly defined. Given that individuals with T2D consume NNS avidly to offset added sugars, have intrinsically augmented intestinal SGLT-1 function and glucose absorption, and perturbed gut microbiota, they are intuitively at increased risk of NNS-evoked dysglycaemia relative to healthy individuals. 
Using data from prospective case-control studies of T2D nested within the US Nurses’ Health Study, exploratory factor analysis and multivariable-adjusted regression revealed that a behavioural pattern of high habitual NNS-containing beverage consumption was positively associated with an increased risk of T2D, as well as perturbed leptin and IGF signalling. The first of two preclinical studies then aimed to determine the precise contribution of the gut microbiota to NNS-evoked dysglycaemia, by supplementing mice with combined NNS sucralose-acesulfame-K in drinking water over two weeks, with or without concurrent broad-spectrum antibiotic (ampicillin and neomycin) to deplete gut microbiota. Sucralose-acesulfame-K augmented jejunal glucose absorption by 31% independent of gut microbiota depletion, and altered GLP-1 responses to jejunal glucose in a partly microbiota-mediated manner, showing disruption to both intestinal and microbial mechanisms of glycaemic control. Next, an intestine-specific T1R2-knockout mouse model was pioneered to directly assess the involvement of intestinal STRs in NNS-evoked dysglycaemia. Two-week supplementation of sucralose-acesulfame-K in drinking water attenuated plasma GIP responses to intragastric glucose independent of intestinal-T1R2, while plasma GLP-1 responses were attenuated in intestine-specific T1R2-knockout mice, extending evidence on the role of intestinal-STR in gut hormone responses that regulate glucose homeostasis in rodents. Finally, a clinical study was undertaken to determine the effects of high habitual NNS consumption on glycaemia in individuals with T2D. In contrast to previous findings in health, two-week diet supplementation with encapsulated sucralose and acesulfame-K in participants with well-controlled T2D had no effect on glycaemic, absorptive, insulinaemic or gut hormone responses to enteral glucose, likely due, in part, to a T2D-intrinsic “ceiling” of intestinal SGLT-1 function and absorptive capacity. 
Together, findings from this thesis deepen mechanistic insight into how high habitual NNS consumption may increase T2D risk, demonstrating that sucralose and acesulfame-K are not inert but have pleiotropic effects on distinct mechanisms of glycaemic control. This reinforces the concept that these NNS should not be endorsed as alternatives to added sugars for healthy individuals, or individuals at risk of T2D, and demands larger, longer-term randomised controlled trials of NNS exposure to further translate these findings to public health policy and industry practices.
  • ItemRestricted
    Interpretable Deep Learning for Medical Imaging and Computer Vision
    (2024) Wang, Chong; Carneiro, Gustavo; Frazer, Helen (St Vincent's Hospital, Melbourne); School of Computer and Mathematical Sciences
    Deep learning networks have gained popularity in computer vision and medical imaging, excelling in various application domains due to their exceptional capabilities in automatic feature extraction and discrimination. Despite these remarkable achievements, existing methods are often viewed as black-box models, making it hard for people to understand how they arrive at specific predictions from input images. This black-box problem has been addressed through the development of new interpretable methods for deep learning networks. One of the most successful solutions is the prototypical-part network (ProtoPNet), a deep-learning classification method that can explain its own predictions by associating decisions on a testing image with learnable prototypes representing object parts of training images. A particularly interesting application of ProtoPNet models is in medical image analysis tasks, although these models do present some challenges. For instance, while ProtoPNet models offer good interpretability, they often fall short of achieving the same level of accuracy as black-box networks. Also, in multi-label tasks, a practical scenario in medical imaging, ProtoPNet tends to fail due to the high degree of entanglement of the learned prototypes. Moreover, ProtoPNet relies on single-level prototypes that cannot fully represent complex visual class patterns characterised by varying sizes and appearances, such as the ones shown by lesions in medical images. To address the issues above and improve the classification and interpretation performance on medical image analysis tasks, we propose several approaches that use knowledge distillation, reciprocal learning, disentangled prototype learning, and hierarchical prototypes. 
Another application of ProtoPNet is in computer vision tasks, where current models mostly rely on point-based prototype learning with limited representation power, often producing trivial (easy-to-learn) prototypes from the most salient object parts. Such issues cause relatively low classification accuracy in natural image recognition and difficulties in detecting Out-of-Distribution (OoD) inputs. We tackle these problems with our novel learning of support prototypes, combined with the trivial prototypes, to achieve enhanced and complementary interpretations. Furthermore, we propose a new generative training paradigm to learn prototype distributions, together with a novel prototype mining strategy inspired by the game-theoretic horse-racing problem of Tian Ji. These innovations enable both interpretable image classification and trustworthy recognition of OoD samples. Results on standard fine-grained classification benchmarks show the effectiveness and advantages of the proposed methods.
  • ItemOpen Access
    Domain Adaptation Object Detection for Mobile Robots
    (2025) Shi, Xiangyu; Dayoub, Feras; Liu, Lingqiao; School of Computer and Mathematical Sciences
    Object detection is a critical capability for mobile robots navigating diverse and dynamic environments. However, standard methods struggle with domain shifts and real-time adaptability, limiting their deployment in real-world scenarios. To address these challenges, this thesis proposes two novel approaches: an Online Source-Free Domain Adaptation (O-SFDA) framework leveraging unsupervised data acquisition, and an Open-Vocabulary Object Detection (OVOD) model for domain adaptation in embodied environments. The first contribution focuses on improving adaptive object detection using O-SFDA by introducing an unsupervised data acquisition framework. In mobile robotics, not all captured frames contain valuable information for adaptation, especially in the presence of strong domain shifts. Our method prioritises the most informative unlabeled samples for inclusion in the online training process, significantly enhancing adaptation performance. Empirical evaluation on real-world datasets demonstrates that this approach surpasses existing state-of-the-art (SOTA) techniques, underscoring the viability of selective data acquisition in improving real-time adaptability. The second contribution leverages OVOD as the base model for domain adaptation in indoor environments. To overcome the limitations of existing methods under domain shifts, we propose a Source-Free Domain Adaptation (SFDA) approach that adapts pre-trained models without requiring access to source data. This approach refines pseudo-labels using temporal clustering, incorporates multi-scale threshold fusion for robust adaptation, and employs a Mean Teacher framework enhanced with contrastive learning. Additionally, we introduce the Embodied Domain Adaptation for Object Detection (EDAOD) benchmark to evaluate performance under sequential changes in lighting, layout, and object diversity. 
Experimental results highlight substantial improvements in zero-shot detection and adaptability to dynamic indoor conditions, demonstrating the effectiveness of our approach. Together, these contributions advance the SOTA in adaptive object detection for mobile robots, addressing critical challenges posed by domain shifts and dynamic environments. By enabling more robust and flexible object detection, this work facilitates the deployment of mobile robots in real-world scenarios, enhancing their ability to navigate and operate effectively in complex, ever-changing conditions.
  • ItemRestricted
    Pretty Much a Saga of a Certain Type of Person: Charles Bukowski and Postmodern Humanism
    (2024) Adams, Benjamin Thomas; Treagus, Mandy; Murphet, Julian; School of Humanities : English, Creative Writing, and Film
    The literary and cultural reputation of American writer Charles Bukowski has been most powerfully defined by a series of ‘either/or’ dichotomies concerning his significance for questions of individual subjectivity, representations of gender, the role of public intellectuals, and how comic modes of expression can be used in either inclusive or exclusionary ways. My thesis argues that, in all these areas, Bukowski’s writing and persona in fact complicate, challenge, and frequently exceed the bounds of these ‘either/or’ terms in which he is often read and represented. I suggest that, in exceeding the terms of these dichotomies, Bukowski aligns with and can be productively read through a postmodern humanist lens, which also seeks to reevaluate various binaries across philosophical, political, cultural, and literary fields of knowledge. This conceptual lens is non-essentialist and self-reflective, premised on the idea that ‘humanity’ or ‘the human’ should be retained as valuable categories of thought and inquiry while also remaining open to ongoing critical re-assessment. Regarding literary studies specifically, I argue for a postmodern humanist perspective that, as I see it, does not deny the ways in which texts and their authors can generate resistance among readers. Thus, this thesis takes a postmodern humanist approach to explore the complexities and frequent contradictions of reading Bukowski as an example of how writers, texts and readers interact – both positively and negatively – in a range of philosophical, aesthetic, and socio-political ways. In short, it seeks to highlight the ways in which ‘either/or’ modes of analysis can be rethought in ‘both/and’ critical terms.
  • ItemRestricted
    Building the Innovation Economy: The History of Innovation Policies in South Australia, 1982 to 2011
    (2024) Harms, Rebekah Joyce; Ankeny, Rachel; Garrett-Jones, Sam (University of Wollongong); School of Humanities
    The closure of General Motors-Holden's South Australian automotive manufacturing plant in 2017, marking an end to automotive manufacturing in Australia, caused a significant blow to the South Australian economy and radically altered the composition of the state’s economy. The decline of South Australia’s traditional manufacturing industry has served to further justify attempts at shifting the state towards more innovative industries. In this thesis, I consider how successive South Australian governments have attempted to build the state’s innovation economy through an exploration of key innovation policies in South Australia from 1982 to 2011. In the thesis, I focus on four examples of innovation policy within science- and technology-related areas. In chapter one, I explore the way in which state governments used education to foster skills that were deemed necessary within the innovation economy. In chapter two, I consider the key role that innovation clusters have played in successive state governments' innovation policies. In chapter three, I explore the dual economic and environmental factors which drove the growth of the state’s renewable energy industry. Finally, in chapter four, I trace the debate that occurred around the commercial growth of genetically modified canola. The application of an historical lens to innovation policy has enabled me to determine some of the successes and failures of the state government’s approach to innovation policy. Lessons learned from the South Australian experience will be of relevance to other post-industrial regions which have faced similar challenges as they seek to transition towards an innovation economy.
  • ItemRestricted
    Droplet etching epitaxy for the nanostructuring of strain-free GaAs quantum dots
    (2024) Gossink, Declan Jotay; Solomon, Glenn; Lewis, David; School of Physics, Chemistry and Earth Sciences : Physics
    Non-classical states of light in the form of single and entangled photons enable tests of fundamental physics and are the basis of many emerging quantum technologies. Since their inception, semiconductor quantum dots have been recognised as excellent sources of quantum light, and are considered to be one of the more promising sources for a scalable quantum architecture. To date, the highest performing quantum dots, in terms of multi-photon suppression, indistinguishability, and brightness, are realised by a process known as droplet etching epitaxy. The key result of the droplet etching technique is the formation of nanoholes on an epitaxial semiconductor film, which may then be subsequently filled in with a semiconductor of lower band-gap to create quantum dots. Invariably, the characteristics of the final quantum dots are dependent on the nanohole geometry. This thesis describes the experimental realisation of nanoholes of varying geometry using aluminium as the etchant material on an AlGaAs epitaxial film. Using molecular beam epitaxy, the effects of substrate temperature, arsenic background pressure and droplet deposition flux on nanohole geometry are systematically studied. The nanoholes are then regrown with GaAs and capped with AlGaAs to create unstrained quantum dots. Light emission from as-created single and ensemble quantum dots is investigated. This work demonstrates the capability of droplet etching to structurally engineer low density quantum dots with emission properties desirable for quantum optics and nanophotonics experimentation.
  • ItemRestricted
    Evaluation of the in vitro and in vivo activities of robenidine and its analogues against human and veterinary pathogens
    (2024) Pi, Hongfei; Trott, Darren J.; School of Animal and Veterinary Science
    Multidrug resistant pathogens are a public health threat worldwide. Leading drug-resistant bacterial pathogens (Escherichia coli, Staphylococcus aureus, Klebsiella pneumoniae, Streptococcus pneumoniae, Acinetobacter baumannii and Pseudomonas aeruginosa) were responsible for 929,000 deaths due to antimicrobial resistance (Murray et al., 2022) and over 3.5 million deaths associated with antimicrobial resistance in 2019, projected to increase to >10 million/year by 2050 if not addressed, with cumulative economic impact rising to over 100 trillion USD. Therefore, there is an urgent medical need to develop new, broad-spectrum antimicrobials with novel chemistry and mechanisms of action, preventing further cross-resistance to existing drug classes. This thesis aimed to characterise two new antibacterial drug classes (robenidine and its analogues, as well as a repurposed fasciolicide) with novel mechanisms of action as new therapeutic options against multidrug resistant pathogens. The anticoccidial drug robenidine (NCL812) and one of its analogues (NCL195) possessed excellent bactericidal activity against methicillin-resistant S. aureus (MRSA), vancomycin-resistant enterococci (VRE) and S. pneumoniae, returning a minimum inhibitory concentration (MIC) range of 2–8 μg/mL against these bacteria. Additionally, the monomeric analogues NCL259 and NCL265 showed the best activity against Gram-negative “KAPE” pathogens (K. pneumoniae, A. baumannii, P. aeruginosa and E. coli), returning an MIC range of 2–64 μg/mL for both compounds against these bacteria. Furthermore, time-kill kinetics assays showed that both NCL259 and NCL265 were bactericidal against E. coli and K. pneumoniae. Therefore, NCL812 and the three analogues (NCL195, NCL259 and NCL265) were subjected to further investigation. 
NCL812 in the presence of sub-inhibitory concentrations of the Gram-negative outer membrane permeabilizers ethylenediaminetetraacetic acid (EDTA) or polymyxin B nonapeptide (PMBN) displayed a 4- to 256-fold increase in the susceptibility of the tested Gram-negative KAPE pathogens. Mammalian cell toxicity studies showed IC50/MIC ratios ranging from 3-fold (for Gram-positive pathogens) to 6-fold (for Gram-negative pathogens) in the presence of EDTA, and IC50/MIC ratios ranged from approximately 2- to 24-fold against all the cell lines tested in the presence of PMBN. NCL195 demonstrated synergistic activity against P. aeruginosa, E. coli, K. pneumoniae, and Enterobacter spp. strains in the presence of sub-inhibitory concentrations of EDTA, PMBN, or polymyxin B (PMB), and was less toxic than NCL812 against the mammalian cell lines tested. In bioluminescent S. pneumoniae and S. aureus murine sepsis challenge models, mice that received two 50 mg/kg intraperitoneal doses of NCL195 4 or 6 h apart exhibited significantly reduced bacterial loads and longer survival times than untreated mice. NCL259 and NCL265 demonstrated moderate antimicrobial activity against a large range (n = 236) of human and companion animal Gram-negative pathogens, with NCL265 being consistently more active, achieving lower MICs in the range of 2–16 μg/mL. NCL259 and NCL265 in combination with sub-inhibitory concentrations of PMB elicited synergistic or additive activity against the strains tested, reducing the MIC of NCL259 by 8- to 256-fold and the MIC of NCL265 by 4- to 256-fold. Further testing of strains resistant to NCL259 and NCL265 indicated a significant role for the AcrAB-TolC efflux pump from Enterobacterales in imparting resistance to these robenidine analogues. However, NCL259 and NCL265 had much higher levels of toxicity to a range of human cell lines compared to NCL812, precluding their further development as novel antibiotics against Gram-negative pathogens. 
The fasciolicide selectively killed methicillin-sensitive (MSSA49775) and methicillin-resistant S. aureus (MRSA USA 300 and MRSA 610) and S. pseudintermedius (3 clinical S. pseudintermedius isolates and 10 clinical MRSP isolates) at an MIC range of 2–4 μg/mL, and VRE (VRE35C, VRE60FR and VRE252) at an MIC range of 4–8 μg/mL. The fasciolicide also inhibited key Gram-negative bacteria in the presence of sub-inhibitory concentrations of PMB, returning MIC90 values of 1 μg/mL for E. coli, 8 μg/mL for K. pneumoniae, 2 μg/mL for A. baumannii and 4 μg/mL for P. aeruginosa. Repeated 4-hourly oral treatment of mice with 50 mg/kg fasciolicide after systemic S. aureus challenge resulted in a significant reduction in S. aureus populations in the blood up to 18 h post-infection (compared to untreated mice) but did not clear the infection from the bloodstream, consistent with its in vivo bacteriostatic activity. In conclusion, the promising in vitro and in vivo activity obtained for NCL195 and the fasciolicide indicates that they are viable candidates for future pre-clinical evaluation to treat serious MDR infections, either alone or in combination with sub-inhibitory concentrations of Gram-negative outer membrane permeabilizers (e.g., PMB). This aligns with newly promoted approaches to combating the emergence of antimicrobial resistance and multidrug resistance (drug repurposing and synergistic combination therapy). Further medicinal chemistry and pharmaceutical improvement of NCL195 and the fasciolicide may enhance their PK/PD parameters and increase their potency, solubility and selectivity, supporting advancement to future clinical development.
  • ItemRestricted
    Effectiveness of Rolling Dynamic Compaction using a Finite Element Methodology
    (2024) Bradley, Andrew Charles; Jaksa, Mark; Kuo, Yien-Lik; School of Architecture and Civil Engineering
    Rolling dynamic compaction (RDC) is a ground improvement technology that has become increasingly adopted for the in situ densification of subgrade soils, the economical compaction of thick layers when in-filling deep excavations, proof rolling, and the construction of mine haul roads. Additionally, RDC finds application in the agricultural industry to compact soils in irrigated areas, to reduce permeability and water loss. Despite the advancements made by field case studies in showcasing the potential of RDC to improve soils at depth, their scope is limited. Unless the site conditions closely match those of a published case study, project engineers often rely on engineering judgment, necessitating expensive field trials to confirm the technology's efficacy. This uncertainty, stemming from a lack of comprehensive understanding, hinders the widespread consideration of RDC. To address this, additional investigations across a diverse range of initial conditions and approaches are needed. Representing RDC within a numerical framework offers a cost-effective alternative, facilitating in-depth investigations tailored to specific site conditions. This research aims to articulate and represent RDC using the Broons BH-1300 8 tonne 4-sided impact roller on coarse-grained soils within a Finite Element (FE) model framework. The objective is to numerically investigate and quantify the effectiveness of RDC on coarse-grained soils, contributing to a broader understanding and confidence in subgrade soil improvement. A comprehensive FE model formulation of RDC was developed, simulating the BH-1300 8 tonne 4-sided impact roller on coarse-grained soil subjected to a maximum of 30 passes. Validation against a controlled field test yielded good agreement of estimates for soil settlement and improvement profiles. The formulation integrated typical motion kinematic profiles of the impact roller observed through high-speed photography during field testing. 
Examination of module kinematics concluded that the average peak kinematic energy of the roller was 62 ± 3 kJ, and the energy delivered to the ground per impact was 23 ± 4 kJ, with a 95% degree of confidence under typical operating conditions. Results from the FE model formulation reasonably reproduced field observations available in the literature. An enhanced understanding of RDC’s impact in relation to ground response emerged through FE simulations, providing distributions and trends for stresses in situ, along with estimates for peak particle acceleration and velocity with depth and along the soil surface. Although resource and time-intensive, the FE model and subsequent post-analysis offer a viable alternative to investigating the potential of RDC, which is traditionally assessed through field trials. The FE model presented herein may offer valuable insights into relationships between initial conditions (soil parameters, roller design, and kinematics) and the resulting settlement and improvement profiles. The FE model is, however, not without limitations. These limitations are highlighted, along with suggestions for overcoming them in future research. Nevertheless, the formulation presented herein provides a significant enhancement in understanding the effectiveness of RDC for practicing engineers and provides a tool to investigate the effectiveness of RDC using a FE methodology.
  • ItemRestricted
    Cold plasma-induced quality changes in foods and beverages and plasma’s potential use in space exploration
    (2023) Warne, George; Hessel, Volker; Fisk, Ian (Primary Supervisor, University of Nottingham); Williams, Phil (University of Nottingham); Wilkinson, Kerry; Tran, Nam Nghiep; School of Chemical Engineering
    The long duration and extreme environment of space missions have led to significant challenges (weight loss and food aversion) in providing sustenance for astronauts. Traditional treatments that extend the shelf-life of food are often synonymous with reduced sensory quality. Furthermore, future missions, such as to Mars (in the late 2030s), will increase the need for food with a prolonged shelf-life and diverse sensory appeal. Cold plasma (non-thermally ionised gas) can reduce the negative impacts of thermal processing methods (e.g., nutritional loss, colour leaching and texture degradation), whilst also reducing microbial growth and increasing shelf-life. Cold plasma consists of free electrons, ions, reactive atomic and molecular species, and ultraviolet radiation that can generate various chemical reactions, which can be controlled through various input parameters. However, there has been little research into the sensory impacts of cold plasma treatments, especially the effects on flavour, which this PhD thesis therefore sought to address. The potential for cold plasma (a known food preservation processing tool) to enhance food flavour through intentional volatile aroma modification of foods, and how this can impact food development for long-distance space missions, was examined. This thesis explored five core research questions on flavour and the use of plasma, on Earth and in extreme environments, via a literature review, a preliminary study, three research articles, and two conceptual designs. The literature review focuses on how cold plasma can modify biomolecular and organoleptic properties, and the research gap into how cold plasma affects food flavour. This addressed the first research question: “What are the trends in current plasma research, and are there potential benefits for applications on Earth or in space?”. 
This review found that cold plasma has not only been shown to curtail microbial growth, enhancing food safety and shelf-life, but also does so without diminishing certain essential organoleptic attributes (e.g., flavour). This dual benefit is especially crucial for long-term space missions, where the preservation of both taste and nutritional quality is of the essence. This was followed up by preliminary research on the effects of cold plasma treatment on the key flavour compounds in fresh and freeze-dried strawberries (as potential space food), culminating in the publication of a research article focused on optimising and characterising the specific interaction effects of cold plasma, freeze-drying, and extreme environmental effects (“zero-averaged gravity”). An aroma analysis method was also designed for a microgravity environment. This research answered the questions: “Can plasma enhance volatile aromas lost during freeze-drying?” and “What does this mean for preparing freeze-dried space food?”. Freeze-drying, a staple in space food preparation, typically leads to a loss of key volatile aroma compounds (predominantly esters). However, the aromas lost during freeze-drying of strawberries were substantially regained with the application of cold plasma. This discovery hints at the possibility of space food that is both long-lasting and flavourful. Cold plasma was then evaluated as a strategy to enhance esterification and, therefore, the sensory impact of these volatile aroma compounds. The mechanistic interactions between different plasma types and esterification elements were highlighted to instigate and pave the way for future bespoke aroma modifications, hinting at the untapped potential in flavour science. Plasma was also evaluated for its ability to break bonds (i.e., those within glycosylated aroma compounds) to reduce the impact of smoke aromas in wine following vineyard exposure to bushfire smoke, which can negatively impact the wine industry. 
This study sought to answer the following questions: “Can plasma be used to modify other compounds, i.e., can it assist in the joining or breaking of bonds between reactants?”, “Can the mechanical interactions between plasma and reactants be determined?” and “Could this be useful on Earth or in space?”. Plasma was not just found to preserve flavour, but also to potentially catalyse esterification processes, turning less desirable aromas (e.g., hexanoic acid) into more pleasant ones (e.g., methyl hexanoate). This finding could significantly impact gourmet food preparation on Earth and also hints at the vast potential plasma holds in transforming food in space, all without extensive resource requirements. While plasma treatment decreased the concentration of some smoke-derived volatile phenols, it was not powerful enough to cleave glycosylated phenols. To improve plasma’s applicability in space, a dynamic microfluidic microplasma system that could potentially be used in microgravity was designed. This addressed two questions: “Can plasma be generated inside a system for liquid applications used in microgravity?” and “How could this fit into what’s onboard the ISS?”. Since flavour enhancement should not stop at solid foods, this research proposed the use of a dynamic DBD-based microfluidic microplasma system that could process liquids in microgravity. With this, there is potential to enhance liquid foods onboard space missions (e.g., in combination with the ISSpresso machine), making them tastier and safer. After improving the aroma of foods typically used for space and designing a plasma system that could be used in space, subsequent research into a microgravity-suitable spectrophotometer for aroma analysis sought to answer the following question: “What analytical equipment can be used in space to monitor changes in quality?”. 
Monitoring is crucial to ensuring consistent flavour quality, especially in the challenging environment of space. This study introduced novel methods, such as a compact spectrophotometer setup and an adapted volatile extraction technique suitable for use in microgravity environments, promising real-time flavour analysis, even in space. This thesis demonstrates novel contributions in flavour modification, flavour in space, plasma-induced esterification, and enhanced analytical techniques for future flavour and space research. By uncovering the myriad potential applications of cold plasma, this thesis not only shows promise for advancing food quality on Earth, but also sets the foundation for gourmet experiences in space.
  • ItemRestricted
    On the role of Antarctic sea-ice loss in driving swell-induced flexure of ice shelves
    (2025) Teder, Nathan; Bennetts, Luke; Massom, Rob (Australian Antarctic Division); Reid, Phil (Bureau of Meteorology); School of Computer Science and Mathematics
This thesis investigates the impact of sea ice losses on increasing ice shelf flexure forced by ocean swell, using satellite imagery for ice shelves and sea ice, a model hindcast for ocean waves, and mathematical modelling for the coupled system. It is motivated by the link between the disintegration of the Larsen B and Wilkins ice shelves and prolonged regional deficits in offshore sea ice, which allowed swell to reach the ice shelves and amplify fractures to the point of large-scale calving. The thesis is split into three projects, the first examining the seasonality of sea ice-free conditions for ice shelves around Antarctica. An algorithm is developed to determine when an ice shelf has a sea ice-free ocean corridor adjacent to it and to extract wave statistics from areas that could directly reach the ice shelf. The results show that the average corridor season for ice shelves typically runs from late December to mid-April. However, there are large variations between ice shelves due to regional differences influencing sea ice. A consequence of this variability is that some ice shelves, such as the Fimbul, Shackleton, Wilkins, and Ross, record a climatological average of >4 m swell at certain times throughout the year. The second project investigates how swell flexure influenced the large-scale calving events that occurred at the Voyeykov Ice Shelf in 2007 and the Wilkins Ice Shelf in 2008. A seven-year dataset of the length of the sea ice barrier and swell-induced flexural stresses is calculated for the Wilkins and Voyeykov Ice Shelf fronts. Both ice shelves recorded multiple periods of high accumulated swell-induced flexural stresses in the eighteen months before calving, alongside the coincident loss of consolidated coastal landfast sea ice (fast ice) and build-up of flexure on the ice shelf front. A conceptual model is proposed for how swell flexure preconditions calving, with a discussion of other calving events that potentially fit this model. 
The third project examines the back-to-back record Antarctic sea ice extent lows in 2022 and 2023, and how they influenced swell-induced flexural stress. A ten-year record of swell flexure was built for 2014–2023, taking into account changes in near-shelf thickness, sea-ice cover “length” (the length of sea ice protecting an ice shelf from incoming swells), and wave statistics for fourteen ice shelves around Antarctica. While 2023 recorded high circumpolar swell flexure, 2022 was below average due to relatively weak incoming swell. Further investigation into swell flexure and its potential causes indicates a significant relationship with changes in ice cover and peak period. The impact of future conditions based on enhanced current extremes in sea ice, waves, and ice shelf thickness showed that swell-induced flexure could make a considerable contribution to total stress for most ice shelf fronts if multiple extremes occur simultaneously. This thesis quantifies and interprets how a loss of Antarctic sea ice influences swell-induced flexure on ice shelves by showing the seasonality of incoming swell, alongside how it preconditions large-scale calving, and how swell-induced flexure develops if climatic extremes occur in sea ice.
  • ItemRestricted
    An early Eocene near-polar flora from eastern Gondwana (Tasmania, Australia) — systematics, adaptations and palaeobiogeographic implications of the non-flowering plants
    (2024) Slodownik, Miriam Andrea; Hill, Robert S.; Goodfellow, John; School of Biological Sciences
The Macquarie Harbour Formation (MHF) of Tasmania, Australia, contains well-preserved fossil plants from a subpolar (~65°S) lowland forest that grew approximately 53–50 million years ago during the Early Eocene Climatic Optimum. This assemblage represents one of the oldest and southernmost post-Cretaceous macrofloras in Australia. These factors render the MHF assemblage an ideal resource for studying high-latitude forests and the origin and evolution of Australia’s floras, which were profoundly influenced by decreasing global temperatures and continental northward drift since the early Eocene. In this thesis, I examined the non-flowering plants of the Macquarie Harbour flora and described them systematically to provide a comprehensive overview of their diversity. Furthermore, I discussed their ecophysiological strategies for life at high latitudes, and their biogeography. This work was based on existing fossil collections from several outcrops, in addition to 419 new specimens that I collected and prepared. I revealed anatomical details using advanced imaging techniques such as scanning electron microscopy (SEM), ultraviolet macro- and fluorescence micro-photography, and X-ray tomography. I systematically described, revised, and phylogenetically analysed the new and existing taxa, and derived biogeographic data by conducting a comprehensive review of the published literature. In the first chapter, I outlined the geological and palaeoenvironmental background for the Macquarie Harbour floras. I summarised the stratigraphy of the Macquarie Harbour Formation and my geological field observations. Furthermore, I outlined the general biogeographic trends of the Southern Hemisphere in the Cenozoic and introduced the concepts of south polar forests and polar refugia. In the second chapter, I presented new fossils of Araucarioides linearis (Araucariaceae), including leaves and the first reproductive organs of the genus. 
I revised the generic and specific diagnoses, and designated A. sinuosa as a junior synonym. Phylogenetic analyses revealed that Araucarioides forms the sister to the ‘agathioid clade’, which comprises extant Agathis and Wollemia. This result, in combination with the presence of Araucarioides in the Late Cretaceous, provides solid evidence for a pre-Cenozoic divergence of the agathioid from the Araucaria clade. Araucarioides was restricted to the high palaeolatitudes and showed morphological adaptations to the pronounced seasonality. These factors may have also facilitated its survival during the end-Cretaceous mass extinction (~66 Ma). Its extinction was likely linked to the changes in light regime and climate resulting from the northward movement of Australia and New Zealand. Next, I focused on the fossil record of Komlopteris in the Southern Hemisphere. I discovered abundant new fossils of Komlopteris cenozoicus. These anatomically preserved fossils confirm the affiliation of the genus with the extinct gymnosperm order Umkomasiales and thus establish that these fossils represent the youngest pteridosperms (“seed ferns”) known to date. Furthermore, a literature review and study of museum collections revealed ten further Komlopteris species of Jurassic to Eocene age from across Gondwana. The lineage, therefore, survived the end-Triassic and end-Cretaceous biotic crises, probably in climatically buffered habitats at high latitudes. Komlopteris’ ultimate demise was linked to the ongoing diversification of angiosperms and the northward migration of Australia. In the fourth chapter, I provided an overview of all non-flowering plants from the early Eocene Macquarie Harbour Formation. I confirmed the presence of at least twelve distinct species, encompassing nine conifers (four Araucariaceae, four Podocarpaceae, and one Cupressaceae), a cycad (Zamiaceae), a pteridosperm (Umkomasiaceae), and a fern (Schizaeaceae). 
A reconstruction of leaf shapes revealed different strategies for the optimisation of light harvesting among the identified taxa. The co-existence of typical canopy-forming and canopy-emergent trees, and the presence of understory elements (such as cycads and ferns), suggests a complex forest structure. Whole-flora comparisons with contemporaneous assemblages revealed a striking floristic similarity to the Patagonian floras. This suggests that a near-continuous circum-Antarctic phytogeographic zone of ever-wet rainforests may have existed for part of the early Cenozoic hothouse period. Lastly, I summarised the results of the data chapters under the overarching goals of this thesis: (1) diversity and abundance; (2) adaptations to life at high latitudes; and (3) palaeobiogeography of the Macquarie Harbour flora.
  • ItemRestricted
    Manage illegal dumping of solid waste
    (2024) Du, Linwei; Zillante, George; Chang, Ruidong; School of Architecture and Civil Engineering
With the growing generation of waste globally, illegal dumping events are increasing as well. Illegal dumping has a range of environmental, economic and social impacts, and properly managing it has become a global concern. Because illegal dumping is a behaviour, managing it, unlike other waste management issues, requires both the timely detection of existing illegal dumping to minimise its impacts and measures to avoid its recurrence in the future. However, there is a lack of research that contributes to illegal dumping management systematically and comprehensively. To investigate this issue, this study aims to understand the current situation of illegal dumping and manage it by combining two perspectives: technical management and policy management. This was achieved by: (1) identifying illegal dumping high-risk areas and their characteristics through technological methods; (2) investigating and analysing the current policies related to the management of illegal dumping, taking Australia as an example; (3) combining system dynamics to assess the impacts of different policies on the management of illegal dumping and propose targeted suggestions. Data were collected through literature review, government databases and model simulation in order to achieve all the research objectives. A number of interesting results were found in this study. For example, by combining GIS and Python, this study constructed a model that could accurately identify illegal-dumping-related risks in the study area and distinguish areas with different levels of risk. This result is extremely beneficial for practical management. Secondly, this model is also able to rank the factors that affect the occurrence of illegal dumping, which could help managers identify the priority factors. 
Similarly, this study examines the effectiveness of different waste management policies in Australia that may relate to illegal dumping. This study found that relying solely on current waste management and recycling policies will make it difficult for Australia to avoid illegal dumping in the future. Through model simulations, this study concludes that the effects of policies that act directly on illegal dumping management, such as publicity policies, cannot be ignored. This study quantified the effect of the publicity policy with the help of system dynamics. In addition, by comparing the effects of different investment options of similar magnitude on illegal dumping, this study identifies the best investment option for the study area. This study considered managing illegal dumping from both technical and policy perspectives, and provides a comprehensive research framework to explore the management of illegal dumping issues. With respect to methodology, this study improves the accuracy of the research models in detecting illegal dumping sites and identifies the influencing factors that need to be prioritised for management at various risk levels. Policy assessment models developed in this study could contribute to future policies on illegal dumping. In addition, this study demonstrates a methodology for assessing the effectiveness of waste policies related to illegal dumping.
  • ItemRestricted
Impacts of Geomagnetic Disturbances on the Low- and Mid-Latitude Ionosphere over Australia
    (2024) Camilleri, Tristan Anthony; Cervera, Manuel; Ward, Bruce; Mackinnon, Andrew; School of Physics, Chemistry and Earth Sciences
The ionosphere is an ionised region of the Earth’s upper atmosphere which can be used to refract high frequency radio waves, facilitating long-range radio communication and over-the-horizon radar. Space weather events which lead to geomagnetic disturbances can influence and disturb the ionosphere, altering its strength and structure for several days. This alters the ionosphere’s refractive properties, and thus has implications for the performance of high frequency systems. The effects of geomagnetic storms can be characterised by the modelling of parameters such as the strength of the ionospheric equatorial electric field and the vertical plasma drift velocities in the equatorial ionosphere. The response of the ionosphere can also be observed through the changes in the electron number density and height of the peak electron number density over a region of the Earth. A new model was created which uses ground-based magnetometer data from Thailand and the Philippines to predict the vertical plasma drift velocity in the equatorial ionosphere, which will influence the ionospheric plasma transport mechanisms over northern Australia. This was used concurrently with an existing ionospheric equatorial electric field model and data from the Jindalee Operational Radar Network (JORN) and Australian Bureau of Meteorology high frequency ionospheric sounder networks to analyse the response of the ionosphere over Australia to geomagnetic storms in March 2015, June 2015, October 2016, and September 2017. The new equatorial ionospheric vertical plasma drift velocity model was developed from magnetometer and ionospheric drift velocity measurements from the Jicamarca Radio Observatory, Peru. It was found to produce a significant improvement in accuracy over existing models in the literature. Interplanetary parameters and global magnetic indices were also used to characterise the geomagnetic disturbances. 
It was found that the ionospheric response over Australia to geomagnetic disturbance is strongly dependent on electrodynamic processes, neutral atmospheric conditions, and the preconditioning of the magnetosphere-ionosphere-thermosphere system.
  • ItemRestricted
    Launching a Law and Returning for Reform: A Historical and Comparative Analysis of Australia's Law for Space
    (2024) Lisk, Joel Willis; Nosworthy, Beth; Henderson, Stacey (Flinders University); de Zwart, Melissa; Adelaide Law School
Australia has long been seen as an attractive place to base space launch activities. With wide, open, and unpopulated areas, launch facilities can be established in areas presenting low risk to the general public. Additional location benefits include Australian territories providing opportunities for near-equatorial launches and regions which could support launches into polar orbits. This is accompanied by Australia’s long-term stable political and social environment. With these factors in mind, it is unsurprising that there have been a range of proposals to establish launch facilities across Australia, from Christmas Island, off the north-west coast of the Australian continent, to central Australia and far-north Queensland. Australia has engaged with the development of international law for the space domain within the United Nations mechanisms since the dawn of the space age, having participated as a founding member of the ad hoc committee and its successor on the peaceful uses of outer space. Australia is also one of the few States to have ratified all five treaties specifically developed for outer space. In the late 1990s the Australian Federal Government responded to proposals to establish privately operated launch facilities on Australian territories with the Space Activities Act 1998, leading Australia to join a short list of countries to enact national laws that permitted national space activities to take place. Regulations that supported this law emerged in 2001 and the law itself was amended in 2002. The launch facility proposals that dominated the 1990s and early 2000s never eventuated. The next decade witnessed a significant change in the nature of the global space industry, with private entities taking the lead in developing technologies and a shift towards small satellite platforms for lower-cost ventures. In response to these changes and a resurgent Australian space industry, the Australian Government undertook a review of the Space Activities Act. 
Legislative changes in 2018 transformed the Space Activities Act into the Space (Launches and Returns) Act 2018. This Thesis explores the development of the Space Activities Act and its evolution into the Space (Launches and Returns) Act 2018 by examining historical records, including previously confidential cabinet records, parliamentary records, and large numbers of public submissions related to the development, implementation, and reform of Australia’s space law over time. Legislative developments in other States are also considered to provide a deep and contextual case study of Australia’s law for space launch and return activities. This Thesis stands as a definitive legislative history of the Space Activities Act and the later Space (Launches and Returns) Act 2018.
  • ItemRestricted
    Construction of the Vicia sativa (common vetch) reference genome and exploration of genetic resources
(2023) Xi, Hangwei; Searle, Iain; Austin, Jeremy; School of Biological Sciences
With the continuous rise of the human population, exacerbated by global warming and water scarcity, humanity is facing a looming challenge in sustained food supply. In this context, Vicia sativa (common vetch), a crop characterized by high protein and nutritional value along with unique drought tolerance, is poised to become a successful human food source. However, the presence of antinutritional compounds like β-cyanoalanine (BCA) and γ-glutamyl-β-cyanoalanine (GBCA) in V. sativa seeds, which are toxic to monogastric organisms, currently restricts its use to green manure and ruminant livestock feed. As an overlooked orphan crop, the paucity of genomic resources for V. sativa has significantly hindered its breeding and conservation efforts. This study aims to collect and explore the genetic resources of V. sativa. In Chapter 2, I assembled the reference genome of V. sativa (n = 6). Utilizing 44× Oxford Nanopore sequencing data, the genome was assembled to contig level with a contig N50 of 684 kbp. Contigs were then polished with Illumina data, and finally assembled to chromosome level using Hi-C data. This assembly resulted in six chromosomes and two organelle genomes, with a complete BUSCO score of 98% and an LAI of 12.96, indicating a high-quality reference genome. Utilizing RNA-seq data, 53,218 genes were identified in the V. sativa genome. A whole-genome duplication event shared with other legumes was identified, and the substitution rate of V. sativa was calculated as 8.02 × 10⁻⁹ per site per year. Chapter 3 presents a genome-wide variation map of 279 V. sativa samples. A significant population structure was observed within the V. sativa population, with the highest nucleotide diversity found in the Middle Eastern population, supporting the region as the center of origin for V. sativa. Selective sweep analysis revealed selection signals in multiple populations. 
For instance, positive selection of the VsSOC1/VsAGL20-like gene and the photoperiod-sensitive FTb2 gene in the higher-latitude pop2 population suggests selective pressure on flowering time, likely due to latitude variation. In Chapter 4, whole-genome sequencing of the white seed coat gg1 mutant and the green seed coat Lov2 variety led to the identification of two alleles of the VsA2 gene controlling seed coat color variation in V. sativa. In Lov2, a mutation of a highly conserved glycine to tryptophan at position 153 of the A2 protein likely causes the green seed coat color. In gg1, a 12 bp deletion in one of the functionally important WD40 domains of VsA2 was associated with the white seed coat phenotype. PCR analysis revealed that this deletion is also present in two other white-seeded varieties, gg2 and gg3, discovered alongside gg1, indicating that they originated from the same mutation event.
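The contig N50 reported for the assembly is a standard contiguity metric: the length of the shortest contig such that all contigs at least that long together span half the total assembly. A minimal sketch of the computation (the contig lengths shown are hypothetical, not from this genome):

```python
def contig_n50(lengths):
    """Return the N50: the smallest contig length L such that contigs
    of length >= L together cover at least half of the total assembly."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length

# hypothetical contig lengths (kbp)
print(contig_n50([100, 80, 60, 40, 20]))  # 80
```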
  • ItemRestricted
    Exploring Alternative Strategies in Antibiotic Discovery to Tackle Multidrug-Resistant Bacteria
    (2024) Wyllie, Jessica Anne; Soares da Costa, Tatiana; Trott, Darren; Unsworth, Nathan (Defence Science and Technology Group); School of Agriculture, Food and Wine
    Although most bacteria were once susceptible to antibiotics, our reliance on the current arsenal has led to significant misuse and overuse, contributing to the emergence of antibiotic-resistant bacteria. Unfortunately, we are now rapidly approaching a ‘post-antibiotic’ era, where infections once routinely treated with antibiotics are no longer treatable. In fact, infectious disease is the second leading cause of death worldwide, further highlighting the urgent need to develop new antibiotics. However, the rate at which resistance is emerging has surpassed the rate at which new antibiotics are entering the market. On top of this, the clinical shelf-life of these life-saving therapeutics is dwindling, with resistance reported to most agents within 2 to 3 years of their clinical introduction. Thus, it is critical that we explore diverse strategies to increase the number of therapeutics entering the clinical pipeline. This thesis focused on providing proof-of-concept for three distinct strategies to help tackle the antibiotic resistance crisis: re-engineering existing antibiotics, exploring novel metal complexes as antibiotics, and identifying inhibitors of novel antibiotic targets. The first approach centred on re-engineering our current arsenal of antibiotics. To do so, we exploited the dimerisation phenomenon of vancomycin to develop a series of novel vancomycin dimers linked through a shapeshifting bullvalene core. These dimers possessed improved potency against bacterial strains resistant to vancomycin, and the inclusion of the fluxional bullvalene core reduced the propensity for susceptible bacteria to develop resistance. These results demonstrate the potential of dimerising antibiotics, which could allow us to exploit the synergistic relationship of two pharmacophores. The second approach explored the potential of novel metal complexes as antibiotics to circumvent the systemic bias that only organic compounds can be successful antibiotics. 
Specifically, we examined a series of silver and gold N-heterocyclic carbene complexes for their antibacterial activity, finding that the silver compounds were more potent against Gram-negative bacteria whilst the gold complexes were more potent against Gram-positive bacteria. Furthermore, these complexes demonstrated reduced propensity for resistance development, indicating that metal complexes may provide a durable solution to the antibiotic resistance crisis. The third approach focused on identifying inhibitors of underexplored antibiotic targets through virtual high-throughput screening. To do so, we developed a virtual screening pipeline that resulted in the identification of several inhibitors of the biosynthesis pathway of UDP-N-acetylglucosamine, an essential building block of bacterial peptidoglycan. Specifically, we identified the first inhibitors of phosphoglucosamine mutase (GlmM) and the first dual-target inhibitors of GlmM and the bifunctional glucosamine-1-phosphate acetyltransferase/N-acetylglucosamine-1-phosphate uridyltransferase (GlmU) enzyme, demonstrating the validity of our virtual screening pipeline. In summary, this thesis provides proof-of-concept that by employing non-traditional antibiotic discovery strategies, we may increase the number of therapeutics entering the antibiotic discovery pipeline and protect future generations from a ‘post-antibiotic’ era.
  • ItemRestricted
    Not Promising: An Extract from a Novel With an Exegesis on Debt and the Bildungsroman
(2023) Sutcliffe, Alex William; Flanery, Patrick; Murphet, Julian; School of Humanities: English, Creative Writing and Film
The creative artefact of this thesis comprises an extract from the novel, Not Promising. The novel follows two friends, neither of whom is particularly adept at or interested in being in their world, as they try to convince a third friend that she does want to stay in the world. Their world comprises the psychiatric and educational institutions, hotels and share-houses of Adelaide. A provincial capital, the city is both a microcosm of deindustrialising economies and a particular place that begets its own mass neurosis or spirit. The novel participates in the tradition of the Bildungsroman; the imperatives to self-formation and social integration are troubled by the same socioeconomic structures that demand them (Stević 14–15)—but also by how the protagonists set about fulfilling them. The extract comprises the first part of the novel. It introduces characters at impasses that, in other conditions, may have been the fulfilment of their maturity (graduation, home-ownership) and develops the narrative consciousness generated by the belatedness and irony of this impasse. The extract brings the narrative to a point at which the protagonists make the promises on which, mistakenly or not, they stake their place in the world. The exegesis, Crises of Maturity, reads the relation between structural debt and the Bildungsroman. Chapter One briefly surveys the literature on the genre. The criticism disagrees on whether the Bildungsroman is in crisis because of crises of social integration or if it is a genre about those crises. Given the sustained desire of protagonists and readers for social integration and self-formation, the chapter develops a theory of how promises operate in the genre even when social integration is in crisis, in order to account for the sustained appeal (and possibility) of the Bildungsroman. Structural debt, because it at once demands and troubles socioeconomic integration, causes a crisis for the genre. 
The chapter concludes by exploring how debt permeates and precludes other promises on which the genre is premised. Chapter Two reads two recent Australian Bildungsromane, Andrew McGahan’s Praise (1992) and Madeleine Watts’s The Inland Sea (2020). The possibilities for their respective narrators’ Bildung are conditioned by the development of the debt economy in Australia in the generation between them. Their narrative techniques both function and fail under structural debt, so the concluding chapter speculates on what forms of social integration—and thus what forms of the Bildungsroman—are possible under structural debt. It aims to show the troubled phase of development from which my creative work emerges. The exegesis also aims to locate these recent Australian grunge (and post-grunge) fictions in the lineage of the Bildungsroman. As a whole, the thesis analyses and aestheticises the contradictions of Bildung under the conditions in which it was written. It cannot, as yet, resolve those contradictions (even aesthetically), but the Bildungsroman teaches us that the only promises we can keep are those that account for the tensions that might break them.
  • ItemOpen Access
    Maximising mill throughput using machine learning techniques and evolutionary algorithms
    (2025) Ghasemi, Zahra; Chen, Lei; Neumann, Frank; Zanin, Max; School of Electrical and Mechanical Engineering
Grinding is a pivotal step in mineral processing plants, accounting for approximately half of all mineral processing costs. Semi-autogenous grinding (SAG) mills are commonly used for grinding, facilitating mineral liberation and preparing ore for subsequent processing steps, such as flotation. Mill throughput is one of the key performance indicators of a SAG mill. Maximising SAG mill throughput is an important objective in mineral processing plants, as it can lead to considerable financial benefits. However, achieving this objective is challenging, as numerous governing variables impact mill throughput. The relationships between these factors and mill throughput are highly nonlinear, and the variables interact with each other, making it difficult to determine the independent effect of each variable. In the current era of artificial intelligence (AI) and machine learning (ML), these methods offer promising solutions to the complexities of mineral processing. This thesis addresses the challenge of maximising SAG mill throughput by leveraging these advanced methods, with a focus on the following four key challenges. First, the baseline for maximising SAG mill throughput is accurately modelling it. This is challenging due to the high multivariable complexity and interactions among input features. Empirical models for this purpose are costly, time-consuming, and limited to the conditions under which experiments were performed. There has not been a comprehensive exploration of ML models utilising process data for this purpose. Furthermore, the effect of data delays on prediction accuracy has not been examined in the current literature. This challenge is addressed in the first step of this thesis. Second, grinding is a highly dynamic process, as many impactful features change over time. Capturing these complexities with a single model is challenging. 
To address this, a new genetic programming method is developed in the second step of this research. This method generates and utilises multiple equations instead of relying on a single model, thereby improving SAG mill throughput prediction. Third, optimising process parameters to maximise mill throughput presents another important challenge. Control process parameters are typically set based on manufacturers’ recommendations or expert knowledge. However, there is a lack of research on utilising ML and AI to identify optimal process parameters. This challenge is addressed in the third step of this research. Finally, similar to most real-world optimisation problems, there are other objectives to consider alongside SAG mill throughput maximisation. Specifically, balancing throughput maximisation with minimising the circulating load is a significant topic that has not been explored in the literature. This challenge will be addressed in the fourth step of this research by developing a multi-objective optimisation framework. It is worth mentioning that one important contribution of this thesis is the use of extensively accumulated operational data, which is rarely utilised for modelling, to develop an accurate mill throughput prediction model that can be adopted for various types of mineral processing plants. The aim of this research is not merely the development of ML or AI techniques. Instead, it focuses on investigating how these advanced methods can be utilised for enhanced decision-making and process control in the grinding step of mineral processing plants. By aligning these techniques with industrial needs, the research aims to maximise mill throughput, contributing to more efficient, reliable, and optimised plant operations. In this research, industrial datasets from SAG mill and ball mill closed-loop processing circuits are utilised as the foundation for analysis and model development. 
This thesis consists of four peer-reviewed papers, three of which have been published and one that has been submitted for publication. In the first step towards maximising mill throughput, a comprehensive comparative study is performed to identify the most accurate machine learning model for predicting SAG mill throughput. The compared models include genetic programming (GP), recurrent neural networks, support vector regression, regression trees, random forest regression, and linear regression. A real-world dataset consisting of 20,161 records is used for this purpose. In this research, for the first time, the delay in the data is identified and incorporated into the preprocessing step by applying the cross-correlation method to increase prediction accuracy. As each model has parameters that can impact prediction accuracy, hyperparameter tuning is conducted to optimise the performance of each model. The results revealed that recurrent neural networks are the best-performing models, followed by genetic programming and support vector regression. The recurrent neural network model is then utilised for sensitivity analysis to identify the effect of different input parameters. The analysis indicated that SAG mill throughput is primarily influenced by the mill turning speed and inlet water, and is expected to increase with higher mill turning speed and lower inlet water within the operating limits examined. In the next stage, an enhanced version of genetic programming, named multi-equation genetic programming (MEGP), is developed for more accurate prediction of SAG mill throughput. The reason for focusing on GP is its strong performance, having ranked second in the previous step, as well as its advantage of transparency in providing explicit equations for prediction. MEGP comprises two main steps: clustering and predicting. 
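The data-delay identification described above can be illustrated with a simple cross-correlation lag scan: shift an input sensor series against throughput and keep the lag with the strongest correlation. This is a minimal sketch on synthetic data, not the thesis implementation; the function name and maximum lag are assumptions.

```python
import numpy as np

def estimate_delay(x, y, max_lag=120):
    """Estimate the lag (in samples) at which series x best explains y.

    A positive result means y trails x by that many samples.
    Hypothetical helper, illustrative only.
    """
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = range(1, max_lag + 1)
    # correlation between x and y shifted back by each candidate lag
    corrs = [np.corrcoef(x[:-lag], y[lag:])[0, 1] for lag in lags]
    return int(np.argmax(np.abs(corrs))) + 1

# Synthetic check: y is x delayed by 15 samples plus noise
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.roll(x, 15) + 0.1 * rng.normal(size=2000)
lag = estimate_delay(x, y)
```

Once a lag is estimated per feature, each input series can be shifted by that amount so that every record is paired with the throughput it actually influenced.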
In the clustering step, different categories of data are identified such that each cluster can be accurately modelled with a distinct equation. These equations are then utilised in the prediction step by applying various prediction approaches. MEGP was implemented with four different distance measures, namely Euclidean, Manhattan, Chebyshev, and cosine distance, to assess the impact of these measures on the model’s performance. With the best-performing prediction approach, MEGP improved prediction accuracy by 10.74% compared with standard GP. This approach utilises all extracted equations and uses both the number of data points in each cluster and the distance to each cluster as weighting factors. Furthermore, the Euclidean distance measure resulted in the highest prediction accuracy among the compared measures. In the third stage, an integrated intelligent framework is developed for maximising SAG mill throughput. This framework comprises an ML-based prediction module and an optimisation module based on evolutionary algorithms (EAs). Local outlier factor (LOF) and recursive feature elimination (RFE) techniques are utilised for outlier detection and feature selection, respectively, to enhance predictive performance. The results revealed that the LOF method for outlier detection did not yield substantial improvements, but RFE had a positive impact, resulting in the removal of five input features. The problem is modelled as a constrained optimisation problem, with the working limits of input features and particle size distribution requirements as constraints, to ensure that the proposed solutions are practically deployable. Various ML models and EAs are compared to identify the best-performing ones for the utilised dataset. The developed framework is able to propose process set points that lead to maximised SAG mill throughput. 
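The multi-equation prediction scheme described above, blending every cluster equation with weights built from cluster size and distance to cluster centres, can be sketched as follows. Plain k-means and per-cluster least-squares fits stand in for the GP-evolved equations; the toy data and the exact weighting form are illustrative assumptions, not the thesis code.

```python
import numpy as np

# Toy data standing in for process features and mill throughput
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=300)

# -- Clustering step: a few iterations of plain k-means (Euclidean distance) --
k = 3
centres = X[rng.choice(len(X), k, replace=False)]
for _ in range(10):
    labels = np.argmin(((X[:, None, :] - centres[None]) ** 2).sum(-1), axis=1)
    centres = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centres[j] for j in range(k)])

# -- One "equation" per cluster (least squares stands in for a GP equation) --
coefs = [np.linalg.lstsq(X[labels == j], y[labels == j], rcond=None)[0]
         for j in range(k)]
sizes = np.bincount(labels, minlength=k)

def predict(x):
    """Blend all cluster equations, weighted by cluster size and
    inverse Euclidean distance to each cluster centre (illustrative)."""
    d = np.linalg.norm(centres - x, axis=1)
    w = sizes / (d + 1e-9)
    w = w / w.sum()
    return float(w @ np.array([c @ x for c in coefs]))
```

Points near a cluster centre are dominated by that cluster's equation, while points between clusters receive a smooth blend, which is the intuition behind using multiple equations rather than a single global model.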
Finally, the fourth step extends the framework developed in the previous step by introducing a second objective: minimising the circulating load. Circulating load is the quantity of particles at the SAG mill discharge that are larger than the sieve aperture and therefore cannot pass through the sieve. By adding this objective, the framework ensures that while mill throughput is maximised, the quality of the grinding process is also considered. For this purpose, the problem is modelled as a multi-objective optimisation problem, and various advanced EAs for multi-objective optimisation are utilised. A comparison of hypervolume values revealed that the Non-dominated Sorting Genetic Algorithm II (NSGA-II) is the best-performing optimiser. A sensitivity analysis confirmed the robustness of the optimal solutions proposed by NSGA-II, as perturbing input features around the optimal values did not improve the objective values. This extended framework is able to propose grinding process set points that maximise mill throughput and minimise circulating load. Furthermore, the optimal solutions form a set of Pareto-optimal solutions rather than a single solution, enabling process experts to select the best settings based on actual process conditions. It is important to note that while the models developed in this research are tailored to the studied system, they can be continually updated with new data inputs, enabling a dynamic and adaptive approach to process modelling. This adaptability allows the framework to be retrained and adjusted for different systems or evolving operational conditions, enhancing its practical value beyond a static solution. The differences in the number of data points evaluated in each chapter are due to the use of different datasets provided by the industry partner at various stages of the research. While all datasets originate from the same industrial process, newer versions were made available over time. 
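The Pareto-optimal set mentioned above can be illustrated with a small non-dominated filter for the two objectives (maximise throughput, minimise circulating load). The candidate values are invented for illustration; NSGA-II performs this kind of non-dominated sorting internally as part of its selection step.

```python
import numpy as np

def pareto_front(throughput, circulating_load):
    """Return indices of non-dominated solutions for
    (maximise throughput, minimise circulating load)."""
    n = len(throughput)
    keep = []
    for i in range(n):
        dominated = any(
            throughput[j] >= throughput[i]
            and circulating_load[j] <= circulating_load[i]
            and (throughput[j] > throughput[i]
                 or circulating_load[j] < circulating_load[i])
            for j in range(n)
        )
        if not dominated:
            keep.append(i)
    return keep

# Candidate set points: throughput (t/h) and circulating load (t/h),
# purely illustrative numbers
tp = np.array([900.0, 950.0, 920.0, 880.0])
cl = np.array([180.0, 240.0, 200.0, 260.0])
front = pareto_front(tp, cl)
```

Here the last candidate is dominated (lower throughput and higher circulating load than the first), so only the remaining trade-off points survive; a process expert would pick among these according to current plant conditions.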
These updated datasets were more complete or included additional relevant features, which allowed for deeper analysis and model refinement in later chapters.
  • ItemOpen Access
    The Evolution, Implications and Implementation of Australia’s Surveillance Guidelines for Conventional Adenomas and Serrated Lesions of the Colorectum in a Digital Age
    (2025) Ow, Tsai-Wing; Rayner, Chris; Tse, Edmund; Adelaide Medical School
    Introduction: Colorectal cancer (CRC) rates in Australia are currently amongst the highest in the world. Surveillance colonoscopy programs can reduce CRC-related mortality and morbidity through the identification of early cancers and removal of their precursors. In 2018, Australia’s adenoma surveillance guidelines were updated to incorporate the most recent published data. Multiple changes were introduced that aimed to rationalise the use of colonoscopy resources but substantially increased the complexity of the guidelines. However, the current literature has yet to explore the effects of these changes on resource utilisation and whether they can be adequately implemented. Furthermore, the role of simple digital tools in aiding the adoption of complex guidelines is unclear. Aims: The aims of this thesis were: 1. To explore the changes introduced in the recently revised Australian surveillance guidelines, specifically in comparison to those of other similar Western regions (Europe, USA, United Kingdom). 2. To measure the quality of colonoscopy procedures and determine which factors influence these key outcome measures in Australia. 3. To compare the change in resource requirements between the new and old surveillance guidelines. 4. To determine whether a digital application could significantly improve the ability of users to adhere to the new guideline recommendations. Methods: A literature review was undertaken concerning colorectal cancer and colonoscopy surveillance guidelines in Australia, with a focus on their development, guideline adherence, and methods to improve their implementation. The differing approaches to colorectal adenoma surveillance in the West were also compared. To examine the quality of colonoscopy locally, we compiled a retrospective database of colonoscopies across five hospitals and used key performance indicators to benchmark each site and identify potential areas for improvement. 
Using the data on adenoma detection from the colonoscopy database, we then modelled both the new and old surveillance guidelines to evaluate the difference in procedural resources required to implement them. Finally, we tested clinicians using simulated scenarios in a randomised cross-over controlled design to evaluate the potential benefit of a digital tool for determining guideline-concordant screening and surveillance recommendations. Results: From our retrospective analysis, we found that inadequate bowel preparation occurred in 7.3% of procedures, while 97.5% of adequately prepared procedures were successfully completed. The pooled cancer, adenoma, and serrated lesion detection rates were 3.5%, 40%, and 5.9%, respectively. Two out of five hospitals failed to meet currently accepted criteria for quality in colonoscopy in Australia. Significant differences were found in patient age, workforce composition, and adenoma detection rate between hospitals (P < 0.001). Medical specialists had a higher adenoma detection rate than surgical specialists (OR 1.53, CI 1.21-1.94, P < 0.001). Cancer detection was independently associated with patient age (OR 1.04; P < 0.001) and with procedures performed by surgical specialists (OR 1.85; P = 0.04), whereas adenoma detection was independently associated with patient age (OR 1.04; P < 0.001), female sex (OR 1.89; P < 0.001), and procedures performed by medical specialists (OR 1.41; P = 0.002). The application of the latest guidelines increased the number of surveillance colonoscopies required at 1 year (RR 1.57, P = 0.009) and 10 years (RR 3.83, P < 0.001) and reduced the number required at under 1 year (RR 0.08, P = 0.002), 3 years (RR 0.51, P < 0.001), and 5 years (RR 0.59, P < 0.001). The overall effect of new guideline adoption was a 21% reduction in surveillance procedures over 10 years, increasing to 22% when excluding patients who would be aged over 75 years at the recommended surveillance time. 
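As a reminder of how odds ratios like those reported above are computed, the sketch below derives an OR and its 95% Wald confidence interval from a 2x2 table. The counts are hypothetical, chosen only so the result lands near the reported OR of 1.53; they are not the study data.

```python
import math

# Hypothetical 2x2 counts (illustrative only, not the thesis data):
#                     adenoma detected | not detected
a, b = 400, 600   # procedures by medical specialists
c, d = 300, 690   # procedures by surgical specialists

odds_ratio = (a * d) / (b * c)                       # cross-product ratio
se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)            # SE of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log)  # 95% Wald CI, lower
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log)  # 95% Wald CI, upper
```

The reported intervals in the thesis come from regression models adjusting for covariates, so they are not reproducible from a single 2x2 table; this sketch only shows the unadjusted calculation.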
A digital application increased concordance with guideline-recommended screening and surveillance intervals by 40% (14/18 vs 10/18, P < 0.001). Application usability was moderately correlated with users’ ability to correctly determine the appropriate guideline recommendation. Conclusions: Although the overall quality of colonoscopy meets Australia’s national quality benchmarks, significant variations in performance between hospitals and between proceduralists of different specialties highlight the need for individual monitoring and indicate room for further improvement. Despite the increased complexity of the newer guidelines, their adoption could result in a 21-22% reduction in colonoscopy resources over 10 years. This provides an additional incentive for their use, especially considering the resource limitations in Australia’s public government-funded health sector. A well-designed digital tool could provide a simple and effective method to aid users in adopting the latest screening and surveillance guidelines.
  • ItemOpen Access
    Domain Generalisation in Reinforcement Learning
    (2025) Orenstein, Adrian; Reid, Ian; Abbasnejad, Ehsan; School of Computer and Mathematical Sciences
    Deep reinforcement learning (RL) aims to learn a general policy for an agent acting in an environment. The policy learns state representations from observations using deep neural networks, and deep RL therefore inherits their defects. For instance, neural networks suffer from shortcut learning, in which models latch on to rudimentary input patterns, leading to a lack of generalisation. Common approaches to remedy these problems are to (1) obtain larger datasets for better data coverage, (2) train neural networks with more parameters, or (3) include more input variations, either by simply adding them to the training set (i.e. data augmentation) or by additionally regularising the objective with information on these variations (i.e. domain information in domain generalisation approaches). However, while these approaches have been investigated in supervised learning, their effectiveness remains underexplored in RL. In this thesis, we investigate methods of improving the generalisation capability of on-policy RL agents. We conduct our investigation on a relatively new procedural benchmark named ProcGen (Cobbe et al., 2020), where, for a particular game, levels are procedurally generated, each receiving its own domain label, giving us an ideal platform to investigate domain generalisation methods developed for supervised learning and their efficacy in on-policy RL. In our investigation we find that utilising domain information does indeed improve generalisation performance. We apply a supervised learning method, AND-mask (Parascandolo et al., 2021), to a PPO (Schulman et al., 2017) agent and find that when the agent does not get many variations of the same environment to learn from, AND-mask effectively regularises the learned representations and enables the agent to generalise better in novel domains. We also identify a limitation of AND-mask that restricts its scalability when learning from more training domains. 
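The core idea of AND-mask, keeping only those gradient components whose sign agrees across training domains, can be sketched in a few lines. This is a simplified illustration of the masking step only, not the published implementation (which includes further details such as gradient rescaling); the array shapes and threshold name are assumptions.

```python
import numpy as np

def and_mask(domain_grads, tau=1.0):
    """Zero out gradient components whose sign is inconsistent across
    domains. domain_grads has shape (n_domains, n_params); tau=1.0
    requires full sign agreement before a component is kept."""
    signs = np.sign(domain_grads)
    agree = np.abs(signs.mean(axis=0)) >= tau   # consistent components
    return domain_grads.mean(axis=0) * agree

# Two domains: the first component agrees in sign, the second conflicts
g = np.array([[0.25,  1.0],
              [0.75, -1.0]])
masked = and_mask(g)
```

Because conflicting components are zeroed, only features whose gradients point the same way in every domain are updated, which is why the regularisation helps when training domains are few but becomes restrictive as their number grows.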
This investigation results in agents that generalise more effectively when the diversity of training domains is limited, which is beneficial when gathering more diverse data in the real world is costly. Lastly, we explore how model scale, the number of training samples, and the diversity of these training samples contribute to generalisation performance. Our investigation focuses on the on-policy case in RL, where data gathered by the policy is more difficult for deep neural networks to learn from because it is highly correlated with the policy. In other words, the data gathered by the policy is not independent and identically distributed (i.i.d.), which is often an assumption required for generalisation. Furthermore, as the agent learns and its behaviour continually changes, the dataset the agent gathers in its experience replay is non-stationary. Given these two complications of non-i.i.d. data and non-stationarity, we observe that larger models using a single backbone to extract features are more effective at generalisation than smaller, decoupled networks, regardless of the number of domains provided during training.