Wednesday November 27th

09:30 - 10:30



GrAI Matter Labs and Western Sydney University


"From Neuroscience to New Computing Devices:  The Benefits and Pitfalls of Neuromorphic Engineering"

Abstract: Neuromorphic engineering was launched as a field by Carver Mead in the early 1980s, with the express intention of using electronics to simulate the human brain and senses.  It was intended to validate and build intuition about neural systems, and to explore new ideas through the relatively rigorous method of seeing whether they would work when physically instantiated.  Despite this scientific purpose, the field always held some promise as a source of new ideas for commercial electronic devices, encouraged by the low power consumption and versatile performance of the human brain.  A focus on analog neurons and spike-based computation kept the methods from being widely adopted, particularly during the several decades in which Moore’s Law ensured that conventional digital electronics improved in performance at a pace no emerging technology could match.

The eventual breakdown of Moore’s Law in the last few years has triggered fresh interest in neuromorphic engineering as a commercial computing technology, and several companies (including Intel, Qualcomm and Samsung) have research efforts in this area.  However, the use of neuromorphic technologies still requires significant paradigm shifts on the part of users, and adoption remains uncertain.

In this keynote presentation, we examine the reasons why neuromorphic engineering may offer improved performance in the post-Moore’s-Law era.  We show that the benefits derive from combining event-based processing, spike communication, network computation, asynchronicity, and stochastic or probabilistic operations.  Each of these features brings benefits and incurs disadvantages, and each requires a different application paradigm from conventional synchronous digital systems.  We will describe a new computing architecture that implements some of these features in a way that requires the least disruptive adaptation of conventional practice, and illustrate how it delivers the benefits of neuromorphic engineering in modern computational applications.  We will conclude by describing two ICs that have been designed using this architecture, and give results on standard power and performance benchmarks.
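To make the idea of event-based processing concrete, the following is a minimal, generic sketch (an illustration only, not the speaker's architecture): an event-driven leaky integrate-and-fire neuron that does work only when an input spike arrives, decaying analytically between events — the basic source of the power savings claimed for event-based systems.

```python
import math

def lif_event_driven(spike_times, weight=0.6, tau=20.0, threshold=1.0):
    """Event-driven leaky integrate-and-fire (LIF) neuron.

    The membrane potential is updated only when an input spike
    arrives; between events it decays analytically, so no
    computation happens while the neuron is idle.
    """
    v = 0.0            # membrane potential
    last_t = 0.0       # time of the previous event
    out_spikes = []
    for t in spike_times:
        v *= math.exp(-(t - last_t) / tau)  # analytic decay since last event
        v += weight                          # integrate the incoming spike
        last_t = t
        if v >= threshold:                   # fire and reset
            out_spikes.append(t)
            v = 0.0
    return out_spikes

# Two closely spaced spikes push the neuron over threshold;
# a late, isolated spike decays away without causing output.
print(lif_event_driven([1.0, 2.0, 100.0]))  # → [2.0]
```

Note that a conventional clock-driven simulation would evaluate every neuron at every timestep; here the cost scales with the number of spikes, which is the paradigm shift the abstract refers to.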

Jonathan Tapson was born in Zimbabwe in 1966 and attended the University of Cape Town, where he obtained undergraduate degrees in Physics and Electrical Engineering, and a PhD in Engineering in 1994.  He spent his early career doing sensor and instrumentation research with a strong industrial focus.  During this time he spun out three companies from his research, all of which are successful today.  He became interested in neuromorphic engineering in 2004 and has since focused his research on neuromorphic sensors, computation, and algorithms.  He moved to Western Sydney University in 2011 as a Professor of Engineering, and was the Director of WSU’s MARCS Institute for Brain, Behaviour and Development from 2014 to 2016.  In 2019 he became the Chief Scientific Officer of GrAI Matter Labs, a new company working on AI hardware acceleration.  Prof Tapson has authored over 170 peer-reviewed papers and a dozen patents.


Thursday November 28th

11:00 - 12:00



NetS3 Laboratory - Fondazione Istituto Italiano di Tecnologia (IIT)


"The emerging era of active neuroelectronic devices for brain interfaces"

Abstract: Progress in neurotechnologies to monitor and perturb neural activity in the nervous system is one of the main drivers of discovery in neuroscience, as well as the substrate for innovation in biomedical applications.

Among these neurotechnologies, electrodes remain among the most efficient transducers of neural activity.  By establishing an electronic-ionic interface that makes it possible to sense the broad frequency range of bioelectrical brain signals and to actuate neural activity, electrode-based devices are widely used in neuroscience research as well as in a range of biomedical applications that target the nervous system.  Examples include platforms for in-vitro pharmacological screening and clinical devices for diagnostics, bioelectric therapies or prosthetics.

However, how brain functions are implemented and executed still remains poorly understood, which limits our current capabilities for interpreting brain signals and intervening in neural activity.  To face this challenge, it is widely recognized that new generations of neurotechnologies able to access a massive number of single neurons are urgently required.

In this lecture, we will review an emerging approach that uses CMOS technology to realize monolithic devices integrating dense, large arrays of thousands of microelectrodes together with low-power front-end and read-out circuits.  In particular, our approach is based on the Active Pixel Sensor (APS) circuit architecture that was originally demonstrated for in vitro and ex vivo applications.  Recently, we proposed a similar APS approach to realize implantable probes (SiNAPS probes) with a modular and scalable circuit solution for monitoring the electrical activity of the brain, in vivo, from entire large arrays of closely spaced electrode-pixels.  As will be discussed, the adoption of active probes rather than conventional passive multielectrode arrays based on thin-film processes opens a new era for neuroelectronics, but also introduces new challenges that need to be addressed at the system level, such as scalability, data dimensionality and power consumption.

Luca Berdondini was born in 1974 in Locarno, Switzerland.  He currently leads the NetS3 Laboratory at the Fondazione Istituto Italiano di Tecnologia (IIT), an interdisciplinary research laboratory launched in 2008 to develop innovative CMOS-based neuroelectronic technologies for research in neuroscience and biomedical applications.  He received an M.Sc. degree in microengineering from the Swiss Federal Institute of Technology of Lausanne (EPFL) in 1999, with a Master's thesis at Caltech (USA).  In 2003 he received a PhD on nano-/micro-fabricated interfaces and CMOS devices for electrophysiology (Samlab, EPFL, Switzerland).  He is among the pioneers of CMOS-based multielectrode arrays for high-resolution electrophysiology, a co-founder of 3Brain AG (Switzerland), and has contributed to more than 70 publications and several patents.


Friday November 29th

11:00 - 12:00



Director of AI Research at NVIDIA and Associate Professor at the Gonda Brain Institute, Bar-Ilan University




Abstract: Deep models are successful in a wide range of perception problems, but often fall short when compared to how people understand, reason about and explain what they see.  To train models that can reason and communicate about complex visual scenes, they need to combine information from two very different modalities: perception and knowledge.  As a result, their architecture should integrate continuous dense models that process visual information with sparse structured models that process knowledge and language.  These challenges can be addressed using new representations that are both structured and differentiable, making it possible to train structured networks efficiently.  I will discuss recent advances in deep learning for developing such representations, and what they mean for designing architectures and systems of networks that can reason and communicate about what they perceive.
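One standard way to bridge a continuous dense model and a discrete knowledge store in a differentiable way (a generic sketch, not the speaker's specific models) is a soft, softmax-weighted lookup: instead of a hard, non-differentiable argmax over symbolic facts, the perceptual vector retrieves a trainable mixture of fact embeddings, keeping the whole pipeline end-to-end differentiable.

```python
import numpy as np

def soft_knowledge_lookup(percept, knowledge_keys, knowledge_values, temp=1.0):
    """Differentiable retrieval from a discrete knowledge store.

    percept:          dense feature vector from a perception model, shape (d,)
    knowledge_keys:   one embedding per symbolic fact, shape (n, d)
    knowledge_values: information attached to each fact, shape (n, k)

    A hard argmax over facts would break gradient flow; the softmax
    returns a soft, fully differentiable mixture instead.
    """
    scores = knowledge_keys @ percept / temp        # similarity to each fact
    scores -= scores.max()                          # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum() # softmax over facts
    return weights @ knowledge_values               # weighted mixture of facts

# Toy store with two "facts"; the percept is closer to the first key,
# so the result lands near that fact's value.
keys = np.array([[1.0, 0.0], [0.0, 1.0]])
values = np.array([[10.0], [20.0]])
out = soft_knowledge_lookup(np.array([2.0, 0.0]), keys, values)
print(out)  # closer to the first fact's value (10.0)
```

Lowering `temp` sharpens the softmax toward a hard symbolic lookup, while higher values blend facts more evenly; this continuous relaxation of a discrete choice is the sense in which such representations are "both structured and differentiable".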

Gal Chechik is a Director of AI Research at NVIDIA and an Associate Professor at the Gonda Brain Institute at Bar-Ilan University.  His research spans learning in brains and machines, including large-scale learning algorithms for machine perception and analysis of representation changes in mammalian brains.  In 2018 Gal joined NVIDIA as a Director of AI Research, leading NVIDIA’s research in Israel.  Prior to that, he was a staff research scientist at Google Brain and Google Research, developing large-scale algorithms for machine perception used by millions daily.  Gal earned his PhD in 2004 from the Hebrew University, developing computational methods to study neural coding, and did his postdoctoral training in the Stanford CS department.  Since 2009, he has headed the computational neurobiology lab at the Gonda center of Bar-Ilan University.  Gal has authored more than 100 refereed publications and patents, including publications in Nature Biotechnology, Cell and PNAS.