OCTAI Workshop 1.1 | Listening to Practitioners (Asia and Europe)
Presenters
Kane Wu
Kane is the founder of ThinkCol, an AI/machine-learning consultancy based in Hong Kong. Over the eight years since its founding, Kane has worked across numerous industries and on government projects. In this talk, he will share recent projects in retail, media, the public sector, and finance, discuss the technical aspects of AI, and offer his view on where AI might be heading for companies and on the problems his clients face.
Jianyang Lum
Jianyang Lum leads the ML team at BotMD, a Singapore-based YC startup that provides chatbot solutions in healthcare to help doctors and patients get information quickly. Previously, he spent 4+ years in ML engineering at Pinterest in the San Francisco Bay Area, working on recommender systems for Ads (e.g. transformer models for shopping, user interest modelling, privacy, candidate generation, and building targeting products). He has fond memories of Oxford, having spent a bit of time there as an undergraduate on exchange.
Many of the algorithms we use today (search, LLM question answering, what pops up on your Google feed) have retrieval at their core, fundamentally answering the question: "how do you find potential needle candidates in the haystack?" This talk will give a brief overview of what retrieval is (drawing on the speaker's experience at both large and small companies) and the breadth of problems solved using retrieval, while touching on some of the issues surrounding retrieval and why it can pose ethical dilemmas.
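As a loose illustration of the "needle candidates in the haystack" framing (not taken from the talk itself), a minimal retrieval sketch ranks candidates by cosine similarity between a query vector and pre-computed document vectors. The documents and vectors below are invented for illustration; real systems would use learned embeddings and approximate nearest-neighbour indexes.

```python
import math

# Toy "haystack": documents represented as pre-computed embedding vectors.
# In practice these would come from a trained encoder; here they are made up.
docs = {
    "shoe ad":      [0.9, 0.1, 0.0],
    "hiking boots": [0.8, 0.2, 0.1],
    "news article": [0.1, 0.9, 0.3],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, k=2):
    """Return the k candidate documents most similar to the query vector."""
    ranked = sorted(docs.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

print(retrieve([1.0, 0.0, 0.0]))  # the two "footwear" documents rank first
```

Exhaustive scoring like this only works for tiny haystacks; the engineering challenge discussed in the talk is doing this at web or ads scale.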
Prof. Nigel Crook
Efficient brain-inspired machine learning
Biological neurons in the brain communicate with each other by emitting electrical pulses or 'spikes'. It is thought that neurons encode information using spikes in two distinct ways: rate-based and temporal coding. Rate-based coding communicates through the frequency of spikes emitted by a neuron, enabling it to transmit information on a continuous-valued, low-to-high scale. Temporal coding, on the other hand, communicates information through the relative timing of spikes. Virtually all modern machine learning algorithms use an approximation to rate-based coding with highly simplified models of non-spiking neurons and synapses (Artificial Neural Networks). It has been shown, however, that temporal coding has, in principle, a significantly higher information-carrying capacity than rate-based coding and can be implemented using much more efficient event-driven algorithms, thereby substantially reducing the energy consumption of AI algorithms, an issue that has caused much concern recently. The drawback of temporal coding, though, is that it is more challenging to develop algorithms that can learn to decode the information embedded in the temporal structure of spikes. In this talk I will discuss how our research draws on recent discoveries in neuroscience to inspire new and potentially much more efficient machine learning algorithms.
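A toy sketch (not Prof. Crook's algorithms; all numbers invented) can make the contrast between the two coding schemes concrete: two spike trains with the same firing rate are indistinguishable under rate-based decoding, yet carry different information in their inter-spike timing.

```python
# Spike trains are lists of spike times (in ms) within a 100 ms window.

WINDOW_MS = 100.0

def rate_decode(spike_times):
    """Rate-based coding: the value is carried by spike frequency alone."""
    return len(spike_times) / WINDOW_MS  # spikes per ms

def temporal_decode(spike_times):
    """Temporal coding (one simple variant): the value is carried by the
    relative timing, here the gaps between successive spikes."""
    return [t2 - t1 for t1, t2 in zip(spike_times, spike_times[1:])]

# Two trains with the SAME rate but DIFFERENT temporal structure:
regular = [10.0, 30.0, 50.0, 70.0, 90.0]   # evenly spaced
burst   = [10.0, 12.0, 14.0, 70.0, 90.0]   # an early burst, then sparse

assert rate_decode(regular) == rate_decode(burst)          # rate code cannot tell them apart
assert temporal_decode(regular) != temporal_decode(burst)  # timing carries extra information
```

This is the sense in which temporal coding has higher information-carrying capacity: distinct timing patterns multiply the number of distinguishable messages per spike count, at the cost of harder decoding.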
Dr. Nunung Nurul Qomariyah
Dr. Nunung Nurul Qomariyah is an Assistant Professor at Bina Nusantara University International, Jakarta, specializing in Artificial Intelligence and Explainable AI in healthcare. She earned her Ph.D. from the University of York, UK, where her thesis focused on preference learning using Description Logic (DL) for structured knowledge representation and intelligent decision-making. Her current research centers on AI-driven decision support for radiology, integrating medical imaging and clinical notes to enhance diagnostic accuracy. She led a Newton British Council–Indonesian Ministry of Research and Education co-funded project in collaboration with the University of York, Binus University, and Pasar Minggu Regional Hospital, resulting in Scopus-indexed publications and international recognition, including 1st Place in the Mendix AI Innovation Competition and Top 20 in IMERI's Open Innovation Startup Competition. Beyond academia, Dr. Nurul is the Chair of SheCodes Society, where she leads initiatives to empower women in technology, digital leadership, and AI education for the younger generation. She oversees programs, partnerships, and community engagement to support women in tech careers, fostering inclusive opportunities in AI and digital innovation.
My research focuses on developing AI models that integrate medical image analysis with clinical notes to assist radiologists in diagnosis and treatment planning. While the technical challenges are significant, ethical and human-centered concerns have proven equally critical, particularly regarding clinician trust and data access. A major challenge was obtaining high-quality medical data, as many radiologists were hesitant to share information due to concerns that AI might replace their expertise. Others doubted AI’s ability to fully understand medical diagnosis beyond pattern recognition. To build trust, I engaged directly with radiologists, demonstrating AI’s role as a collaborative decision-support tool rather than a replacement. Beyond data access, ethical concerns such as bias, transparency, and accountability emerged. AI models trained on imbalanced datasets risk reinforcing health disparities, while deep learning’s opacity raises concerns about interpretability. Additionally, AI’s reliance on clinical notes introduces risks of misinterpreting physician intent. Privacy and data security remain critical, especially given regulatory variations across healthcare systems. Cultural and social factors further shape these challenges. While AI ethics discussions are often Western-centric, concerns about job displacement, clinician trust, and regulatory uncertainties vary across different healthcare systems. Addressing these complexities is essential for the responsible integration of AI in clinical practice.
Dr. Jakob Zeitler
Jakob Zeitler's research at Oxford University focuses on making machine learning, causal inference, and Bayesian optimisation work in the real world. His other interests include simplifying and explaining artificial intelligence and its impact on our lives, e.g. in healthcare, drug development, chemistry, and more.
Can We Build Trust in Artificial Intelligence?
In my talk, I will discuss how the consequences of decision-making are crucial to how we build trust, whether in humans or machines. Causality plays a central role here and has been studied extensively with mathematical tools over recent decades. I will first set out the context and fundamentals of decision-making, introduce causal inference as a tool for assessing decision-making in healthcare, finance, and beyond, and conclude with open questions.