
Dr. Regan Andrews: Academic Research Hub

I am Dr. Regan Andrews (BIT, MPP, DComp), an academic and researcher working at the intersection of artificial intelligence, data science, and institutional IT ecosystems. My work is driven by a single throughline: building AI systems that are interpretable, ethical, and truly aligned with human meaning.

I completed my Doctor of Computing in AI/Data Science (with Distinction) at Unitec Institute of Technology in 2021, with a doctoral thesis on applied AI in enterprise IT ecosystems (hdl.handle.net/10652/5394). Prior to that, I earned a Master of Professional Practice (Distinction, 2018) and a Bachelor of Information Technology (2016) at Otago Polytechnic.

My academic research centers on:

- AI Interpretability & Explainability – building models that reveal their reasoning in human-legible ways.
- Generative Compression (LPGC) – inventing compression methods that preserve semantic meaning, not just pixels.
- AI Ethics in Deployment – frameworks for safe, transparent AI in public and institutional systems.
- Applied AI for Education & Infrastructure – ensuring AI enhances, rather than replaces, human decision-making.
- Data Governance & Systemic Transparency – accountability for how systems transform and transmit information.

I am currently developing several papers in collaboration with colleagues across AI and information systems, including work on semantic fidelity in compression, toolkits for causal transparency in ML pipelines, and governance frameworks for public-sector AI. My projects aim to cross disciplinary boundaries: from technical codec design to narrative-based teaching of algorithmic ethics.

Beyond publications, I hold provisional patents for two applied AI prototypes:

- LPGC (Layered Perceptual Generative Compression) – an AI-based video/audio codec designed for human-perceived fidelity.
- AWFS (Adaptive Workflow Forecasting System) – a predictive ML engine for enterprise bottleneck analysis.

I also have a long career in professional IT practice (2001–2024), spanning roles at the University of Auckland, Air New Zealand, Vodafone, IAG, Spark, and Ports of Auckland, as well as consulting and training across Australia and New Zealand. My experience bridges industry implementation with academic theory, an integration I see as crucial to keeping AI research grounded in lived systems.

Teaching is central to my practice. I design and deliver interactive, outcomes-based training in AI, IT service management, CRM, and institutional systems, and I plan future courses on Human-Centered AI, Compression Theory, and AI in Institutional Ecosystems. I maintain a public teaching project, the Explainer Journal, which explores AI ethics and system transparency for general audiences, and I'm the creator of the Unified Doctor project, where narrative is used as a tool for teaching and interrogating AI and ethics through sci-fi.

Currently, I am preparing for academic roles in Hong Kong, with a focus on cross-border collaboration in AI ethics, data governance, and applied institutional AI. I am learning Cantonese and Mandarin to better integrate into the academic and cultural context of the region.

📧 scholar@quietsystems.net
📱 +64 (21) 402 435

Paradox-Aware Reinforcement Learning for Closed-Loop Time Series Data

Abstract: Reinforcement learning (RL) agents in sequential decision-making face several fundamental dilemmas, or "paradoxes," that hinder real-world deployment. This paper provides a comprehensive analysis of paradox-aware reinforcement learning in the context of closed-loop time series systems. We focus on key theoretical challenges – such as the exploration-exploitation dilemma, the temporal credit assignment problem, the simulation-to-reality gap, and distributional shift – and examine how thes...
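The exploration-exploitation dilemma is easiest to see in a closed-loop setting, where the agent's own actions shape the data it later learns from. The sketch below is my own minimal illustration, not the paper's method; names such as feedback_gain and the reward model are assumptions made for the example. It shows an epsilon-greedy agent whose action choices perturb a state that biases future rewards, producing exactly the kind of distributional shift the abstract describes.

```python
import random

# Minimal closed-loop sketch (illustrative assumption, not the paper's setup):
# the agent's action feeds back into the next observation, so exploration
# changes the very data the agent later learns from.

def run_epsilon_greedy(n_arms=3, steps=500, epsilon=0.1, feedback_gain=0.05, seed=0):
    rng = random.Random(seed)
    base_rewards = [0.2, 0.5, 0.8]      # latent per-arm reward means (assumed)
    estimates = [0.0] * n_arms          # running value estimates
    counts = [0] * n_arms
    state = 0.0                         # closed-loop state shifted by past actions
    total = 0.0
    for _ in range(steps):
        # Exploration-exploitation dilemma: explore with probability epsilon,
        # otherwise exploit the current best estimate.
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])
        # Closed-loop feedback: the chosen action perturbs the state, which in
        # turn biases future rewards (a simple form of distributional shift).
        reward = base_rewards[arm] + state + rng.gauss(0, 0.1)
        state = 0.9 * state + feedback_gain * (arm - 1)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return estimates, total

if __name__ == "__main__":
    est, total = run_epsilon_greedy()
    print("value estimates:", [round(e, 3) for e in est], "return:", round(total, 2))
```

Lowering epsilon makes the agent exploit sooner, but in this feedback setting early exploitation also locks in the state drift its own actions caused, which is the crux of the dilemma in closed-loop data.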

Encoding Memory-Loss Events as Suppressed Data Structures in AI Models

Abstract: Artificial intelligence (AI) systems must balance memory retention with the ability to forget irrelevant or harmful information. This paper provides a comprehensive overview of how memory-loss events – instances of forgetting or suppressing stored information – can be modeled in AI through specialized data structures and algorithms. We integrate theoretical and applied perspectives from deep learning (e.g. catastrophic forgetting in neural networks), transformer architectures (e.g. att...
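One concrete reading of "memory-loss events as suppressed data structures" is soft deletion: a forgetting event flags an entry as suppressed rather than erasing it, so retrieval treats it as gone while an audit trail survives. The sketch below is my own minimal illustration under that assumption; SuppressedMemoryStore and its methods are hypothetical names, not an API from the paper.

```python
import time
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: a memory store where a "memory-loss event" marks an
# entry as suppressed instead of deleting it. Retrieval then behaves as if
# the memory were forgotten, while the record itself is retained.

@dataclass
class MemoryEntry:
    key: str
    value: str
    suppressed: bool = False
    suppressed_at: Optional[float] = None

class SuppressedMemoryStore:
    def __init__(self):
        self._entries = {}

    def write(self, key, value):
        self._entries[key] = MemoryEntry(key, value)

    def suppress(self, key):
        """Record a memory-loss event: hide the entry without destroying it."""
        entry = self._entries.get(key)
        if entry is not None:
            entry.suppressed = True
            entry.suppressed_at = time.time()

    def recall(self, key):
        """Retrieval honours suppression: suppressed entries read as forgotten."""
        entry = self._entries.get(key)
        if entry is None or entry.suppressed:
            return None
        return entry.value

store = SuppressedMemoryStore()
store.write("fact", "user prefers metric units")
store.suppress("fact")               # memory-loss event
assert store.recall("fact") is None  # reads as forgotten; data kept for audit
```

Keeping the suppressed record distinguishes reversible, auditable forgetting from destructive deletion, which is the property that makes suppression useful as a data-structure model of memory loss.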

Narrative Anchoring as a Model for Paradox-Free Time Travel in AI Simulations

Introduction: Time travel to the past traditionally raises famous logical paradoxes in physics and fiction, the most well-known being the "grandfather paradox." This paradox asks: what happens if a time traveler goes back and prevents their own grandparents from meeting, thus preventing the time traveler's birth? Such a self-contradictory causal loop threatens the consistency of reality. Stephen Hawking humorously illustrated the issue by hosting a party for time travelers, sending the invitatio...

Causal Graph Neural Networks for Temporal Consistency in AI Predictions

Abstract: Artificial intelligence (AI) models that make sequential or time-dependent predictions often struggle with temporal consistency – the requirement that outputs vary smoothly and logically over time without erratic jumps. Abrupt or incoherent changes in predictions can degrade performance in applications from video analysis to time-series forecasting. This paper surveys recent advances in Causal Graph Neural Networks (GNNs) as a promising approach to enforce temporal consistency in AI pr...
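A common way to encourage temporal consistency, independent of any particular causal GNN architecture, is a smoothness regularizer that penalizes abrupt changes between consecutive predictions: L = L_task + λ Σ_t ||ŷ_t − ŷ_{t−1}||². The PyTorch sketch below is my own minimal illustration of that idea, not the paper's method; lambda_smooth and the tensor shapes are assumptions.

```python
import torch

# Illustrative sketch (not the surveyed method): a temporal smoothness term
# added to a task loss, penalizing large frame-to-frame prediction jumps.

def temporally_consistent_loss(preds, targets, lambda_smooth=0.1):
    """preds, targets: (T, d) tensors of per-step predictions and labels."""
    task_loss = torch.nn.functional.mse_loss(preds, targets)
    # Penalize abrupt changes: sum_t ||y_t - y_{t-1}||^2, averaged over steps.
    smoothness = (preds[1:] - preds[:-1]).pow(2).sum(dim=-1).mean()
    return task_loss + lambda_smooth * smoothness

T, d = 20, 4
preds = torch.randn(T, d, requires_grad=True)   # stand-in for model outputs
targets = torch.randn(T, d)
loss = temporally_consistent_loss(preds, targets)
loss.backward()  # gradients flow through both the task and smoothness terms
print(float(loss))
```

Raising lambda_smooth trades fidelity to the per-step labels for smoother trajectories; a causal model can supply the structure that decides which changes are legitimate and which are erratic.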