Speakers
Top 3 Keynote Presentations
Benefits and Pitfalls of Neuromorphic Engineering
In this keynote presentation, we examine why neuromorphic engineering may offer improved performance in the post-Moore’s-Law era. We show that the benefits derive from combining event-based processing, spike communication, network computation, asynchrony, and stochastic or probabilistic operations.
Each of these features brings benefits and incurs disadvantages, and each requires a different application paradigm than conventional synchronous digital systems do. We will describe a new computing architecture that implements some of these features while requiring the least disruptive departure from conventional practice, and illustrate how it delivers the benefits of neuromorphic engineering in modern computational applications. We will conclude by describing two ICs designed using this architecture and presenting results on standard power and performance benchmarks.
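As a concrete illustration of the first two features above, here is a minimal event-driven simulation of a leaky integrate-and-fire neuron in Python. It sketches event-based processing with spike communication in general terms only; the abstract does not specify the presented architecture, and every constant here (time constant, threshold, weights, event times) is an illustrative assumption.

```python
import math

# Minimal event-driven leaky integrate-and-fire (LIF) neuron.
# Illustrative only: not the keynote's architecture. All parameters
# are assumptions chosen for the example.

class LIFNeuron:
    def __init__(self, tau=20.0, threshold=1.0):
        self.tau = tau            # membrane time constant (ms), assumed
        self.threshold = threshold
        self.v = 0.0              # membrane potential
        self.last_t = 0.0         # time of last update (ms)

    def on_spike(self, t, weight):
        """Process one input spike event at time t; return True if we fire.

        The membrane potential is updated only when an event arrives,
        decaying analytically over the silent interval. This lazy update
        is the event-driven saving over clocked, per-timestep simulation.
        """
        self.v *= math.exp(-(t - self.last_t) / self.tau)  # decay since last event
        self.last_t = t
        self.v += weight
        if self.v >= self.threshold:
            self.v = 0.0          # reset after firing
            return True
        return False

# Sparse input: three events instead of thousands of clock ticks.
neuron = LIFNeuron()
for t, w in [(1.0, 0.6), (3.0, 0.6), (80.0, 0.6)]:
    print(t, neuron.on_spike(t, w))   # fires at t=3.0 only
```

When activity is sparse, work scales with the number of events rather than with elapsed time, which is one intuition behind the power benefits claimed for event-based neuromorphic systems.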
Active Neuroelectronic Devices for Brain Interfaces
In this lecture, we will review an emerging approach that uses CMOS technology to realize monolithic devices integrating large, dense arrays of thousands of microelectrodes together with low-power front-end and read-out circuits. In particular, our approach is based on the Active Pixel Sensor (APS) circuit architecture that was originally demonstrated for in vitro and ex vivo applications.
Recently, we proposed a similar APS approach to realize implantable probes (or SiNAPS probes) with a modular and scalable circuit solution for monitoring the electrical activity of the brain, in vivo, from entire large arrays of closely spaced electrode-pixels. As will be discussed, the adoption of active probes rather than conventional passive multielectrode arrays based on thin-film processes opens a new era for neuroelectronics, but it also introduces new challenges that need to be addressed at the system level, such as scalability, data dimensionality, and power consumption.
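To make the data-dimensionality and power challenge concrete, here is a back-of-envelope estimate of the raw output rate of a dense electrode-pixel array. The channel count, sampling rate, and ADC resolution are assumed round numbers for illustration, not SiNAPS specifications.

```python
# Back-of-envelope data-rate estimate for a dense active probe.
# All numbers are illustrative assumptions, not device specifications.

n_electrodes = 1024        # electrode-pixels read out simultaneously
sample_rate_hz = 25_000    # per-channel sampling rate (spike band)
bits_per_sample = 10       # ADC resolution

raw_rate_bps = n_electrodes * sample_rate_hz * bits_per_sample
print(f"raw data rate: {raw_rate_bps / 1e6:.1f} Mbit/s")  # -> 256.0 Mbit/s
```

Streaming hundreds of megabits per second from an implant quickly stresses its power and bandwidth budget, which is why system-level solutions such as on-chip processing or event-based readout become attractive as arrays scale.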
Deep Understanding for Perception
Deep models are successful in a wide range of perception problems, but they often fall short of how people understand, reason about, and explain what they see. Models that can reason and communicate about complex visual scenes must combine information from two very different modalities: perception and knowledge.
As a result, their architecture should integrate continuous, dense models that process visual information with sparse, structured models that process knowledge and language.
These challenges can be addressed with new representations that are both structured and differentiable, making it possible to train structured networks efficiently. I will discuss recent advances in deep learning for developing such representations, and what they mean for designing architectures and systems of networks that can reason and communicate about what they perceive.
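As one possible reading of "structured and differentiable", the sketch below pairs dense perception features with a simple relational module, in the spirit of relation networks (Santoro et al., 2017). This is an assumed illustration of the general idea, not the speaker's model; the object features are random stand-ins for the output of a dense perception backbone such as a CNN.

```python
import torch
import torch.nn as nn

# Sketch of a dense-plus-structured architecture: a relational module
# (structured, over object pairs) on top of dense object features.
# Assumed illustration; not the architecture discussed in the talk.

class RelationalReasoner(nn.Module):
    def __init__(self, obj_dim=16, hidden=32, out_dim=8):
        super().__init__()
        # g scores each ordered (object, object) pair: the structured part.
        self.g = nn.Sequential(nn.Linear(2 * obj_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden))
        # f aggregates the pairwise relations into an output representation.
        self.f = nn.Linear(hidden, out_dim)

    def forward(self, objects):               # objects: (batch, n_obj, obj_dim)
        b, n, d = objects.shape
        oi = objects.unsqueeze(2).expand(b, n, n, d)
        oj = objects.unsqueeze(1).expand(b, n, n, d)
        pairs = torch.cat([oi, oj], dim=-1)        # (b, n, n, 2d): all pairs
        relations = self.g(pairs).sum(dim=(1, 2))  # sum over pairs: differentiable
        return self.f(relations)

# Random features stand in for a CNN backbone's per-object outputs.
feats = torch.randn(2, 5, 16)
print(RelationalReasoner()(feats).shape)       # torch.Size([2, 8])
```

Because the pairwise structure is expressed with ordinary differentiable operations, the relational module trains end to end with the dense perception network by standard backpropagation, which is the key property such structured representations need.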