KEYNOTE SPEAKERS
Prof. John Macintyre
Pro Vice Chancellor at the University of Sunderland, United Kingdom
Title: “Is ‘Big Tech’ Becoming the ‘Big Tobacco’ of Artificial Intelligence?”
Recent developments in the research, development, implementation and use of AI include worrying trends that raise big questions about the future direction of the whole field. As part of this, the role of “Big Tech” – the huge corporate entities that now dominate the development of AI technologies and products – is crucial, both in terms of the technology they develop and the researchers they employ. Their dominance places them at the apex of R&D and product development activity in AI, which in turn means they have a great responsibility to ensure that this activity leads to fair, transparent, accountable, and ethical AI systems and products. They also have a great responsibility to support and nurture their staff. This talk will examine recent developments in AI and the role of Big Tech, and ask whether they are stepping up to these responsibilities.
Professor John MacIntyre is Pro Vice Chancellor at the University of Sunderland. He did his doctorate in Applied Artificial Intelligence in the early 1990s, and went on to establish the Centre for Adaptive Systems, which became recognised by the UK Government as a Centre of Excellence in Applied AI. He has published more than 170 papers and given numerous keynote presentations at events around the world. He is the Editor-in-Chief of Neural Computing & Applications, a role he has held since 1996. NC&A publishes peer-reviewed original research on applied AI, receiving over 4,000 submissions in 2020. John is also Co-Editor-in-Chief of a new journal, AI and Ethics, which he established with Professor Larry Medsker of George Washington University this year. The first original research and thought leadership pieces were published online in AI and Ethics in October 2020, and the journal is now making a significant contribution to the public debate on the future direction of AI.
Prof. Hojjat Adeli
Ohio State University, Columbus, USA; Fellow of the Institute of Electrical and Electronics Engineers (IEEE); Honorary Professor, Southeast University, Nanjing, China; Member of the Polish and Lithuanian Academies of Sciences; Elected Corresponding Member of the Spanish Royal Academy of Engineering
Title: Machine Learning: A Key Ubiquitous Technology in the 21st Century
Abstract: Machine learning (ML) is a key and increasingly pervasive technology in the 21st century. It is going to impact the way people live and work in a significant way. In general, machine learning algorithms simulate the way the brain learns and solves an estimation/recognition problem. They usually require a learning phase to discover patterns among the available data, much as humans do. An expanded definition of ML is advanced as algorithms that can learn from examples and data and solve seemingly intractable learning and unteachable problems, referred to as ingenious artificial intelligence (AI). Recent and innovative applications of ML in various fields and projects currently being pursued by leading high-tech and industrial companies such as Boeing, Google, IBM, Uber, Baidu, Facebook, and Tesla are reviewed. Then, machine learning algorithms developed by the author and his associates are briefly described. Finally, examples are presented in different areas, from health monitoring of smart high-rise building structures to automated EEG-based diagnosis of various neurological and psychiatric disorders such as epilepsy, Alzheimer’s disease, Parkinson’s disease, and autism spectrum disorder.
Short Bio: Hojjat Adeli received his Ph.D. from Stanford University in 1976 at the age of 26. He is currently an Academy Professor at The Ohio State University, where he held the Abba G. Lichtenstein Professorship for ten years. He is the Editor-in-Chief of the international journals Computer-Aided Civil and Infrastructure Engineering, which he founded in 1986, and Integrated Computer-Aided Engineering, which he founded in 1993. He has also served as the Editor-in-Chief of the International Journal of Neural Systems since 2005. He has been an Honorary Editor, Advisory Editor, or member of the Editorial Board of 144 research journals. He has authored over 600 research and scientific publications in various fields of computer science, engineering, applied mathematics, and medicine, including 16 ground-breaking high-technology books. He is the recipient of over sixty-five awards and honors, including five Honorary Doctorates and Honorary Professorships at several Asian and European universities. He is a member of Academia Europaea, a corresponding member of the Spanish Royal Academy of Engineering, a foreign member of the Lithuanian Academy of Sciences and the Polish Academy of Sciences, a Distinguished Member of the American Society of Civil Engineers (ASCE), and a Fellow of AAAS, IEEE, AIMBE, and the American Neurological Association. He was profiled as an Engineering Legend in the journal Leadership and Management in Engineering, ASCE, April 2010, by a noted biographer of legendary engineers.
Prof. Antonis Argyros
Professor and Chair, Computer Science Department, University of Crete
Researcher, Foundation for Research and Technology – Hellas (FORTH)
Title: Human-Centered Computer Vision: Core Components and Applications
Abstract: Computer vision is an area of artificial intelligence aimed at developing technical systems capable of perceiving the environment through image and video processing and analysis. In this talk, we mainly focus on human-centered computer vision, that is, computer vision for capturing aspects of human presence such as the geometry and motion of the human body, as well as for recognizing human actions, behavior, intentions and emotional states. Such technologies may constitute a fundamental building block for the development of a variety of applications in almost all aspects of human life (health, security, work, education, transportation, entertainment, etc.). Within this area, we give specific examples of our research activity and highlight the significant boost achieved through the exploitation of state-of-the-art machine learning techniques and deep neural networks. We also give examples of applications developed on top of these technologies in the fields of robotics and ambient intelligence environments.
Short Bio: Antonis Argyros is a Professor of Computer Science at the Computer Science Department (CSD), University of Crete (UoC) and a researcher at the Institute of Computer Science (ICS), Foundation for Research and Technology-Hellas (FORTH) in Heraklion, Crete, Greece. His current research interests fall in the areas of computer vision and pattern recognition, 3D reconstruction, and image motion and tracking, with emphasis on human body pose and shape analysis and the recognition of human activities and gestures. He is also interested in applications of computer vision in the fields of robotics and smart environments. In these areas, he has published more than 180 papers in scientific journals and refereed conference proceedings, and has delivered several invited and keynote talks at international events, universities and research centers. Antonis Argyros has served as a General Co-chair of ECCV’10, as a Program Co-chair of IEEE FG’20 and ICVS’19, as a co-founder and co-organizer of the HANDS’15, ’17, ’18, ’19 series of workshops, and as an Area Chair/Area Editor/Associate Editor of several editions of top vision, robotics and signal processing conferences (ICCV, ECCV, BMVC, ICPR, ICRA, IROS, EUSIPCO). He serves as a member of the Advisory Board of the IET Image Processing journal and as an Area Editor for the Computer Vision and Image Understanding (CVIU) journal. He has served as a member of the Editorial Board of the IEEE Robotics and Automation Letters journal, as a reviewer for more than 35 journals, and as a TPC member of more than 70 conferences in computer vision, computer graphics, robotics and related disciplines. Since 1999, Antonis Argyros has been involved in more than 30 European and national RTD projects on computer vision, pattern recognition, image analysis and robotics.
Prof. Peter Tino
School of Computer Science, University of Birmingham, UK
Title: Unveiling Recurrent Neural Networks – What Do They Actually Learn and How?
Abstract: When learning from “dynamic” data, where the order in which the data is presented matters, the key issue is how such temporal structures get represented within the learning machine. In the case of artificial neural networks, an often-adopted strategy is to introduce feedback connections with time delays. This enables the neurons to form their activation patterns based on the past, as well as the current, neural activations. Neural networks of this kind became known as Recurrent Neural Networks (RNNs). Many diverse architectures fall under this umbrella, with a wide variety of application domains. We will briefly review past attempts to understand the way RNNs learn to represent the past in order to perform the tasks they are trained on.
To that end, we will adopt the general view of RNNs as parameterized state space models and input-driven non-autonomous dynamical systems. We will then present some new results connecting RNNs to a widely known class of models in machine learning – kernel machines. In particular, we will show that RNNs can be viewed as “temporal feature spaces”. This framework will enable us to understand how high-dimensional RNNs constructed with very few degrees of freedom in their parameterization can still achieve competitive performance. Such observations can be viewed as “dynamical analogs” of classical “static” kernel machines, which often achieve excellent performance using rich feature spaces constructed with very few degrees of freedom (e.g. a single scale parameter in Gaussian kernels).
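The “parameterized state space” view can be sketched in a few lines (an illustrative toy, not code from the talk; the dimensions and random weights below are assumptions): a vanilla RNN updates a hidden state h_t = tanh(W h_{t-1} + U x_t), and the state reached after reading the whole sequence acts as a temporal feature vector on which a simple readout can operate.

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, input_dim = 50, 1

# Randomly initialized recurrent (W) and input (U) weight matrices.
W = rng.normal(scale=0.1, size=(state_dim, state_dim))
U = rng.normal(scale=1.0, size=(state_dim, input_dim))

def rnn_states(x_seq, W, U):
    """Run the state-space recurrence h_t = tanh(W h_{t-1} + U x_t)."""
    h = np.zeros(state_dim)
    states = []
    for x_t in x_seq:
        h = np.tanh(W @ h + U @ np.atleast_1d(x_t))
        states.append(h.copy())
    return np.array(states)

# The final state is a "temporal feature" summarizing the whole input sequence.
phi = rnn_states([0.5, -0.2, 0.9, 0.1], W, U)[-1]
print(phi.shape)  # (50,)
```

Note that W and U here are random and untrained, echoing the talk’s point that even RNNs with very few tuned degrees of freedom can induce rich temporal feature spaces.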
Short Bio: Peter Tino holds a Chair position in Complex and Adaptive Systems at the School of Computer Science, University of Birmingham, UK. His interests span machine learning, neural computation, probabilistic modelling and dynamical systems. Peter is fascinated by the possibilities of cross-disciplinary blending of machine learning, mathematical modelling and domain knowledge in a variety of scientific disciplines ranging from astrophysics to bio-medical sciences.
He has served on the editorial boards of a variety of journals, including IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Cybernetics, Scientific Reports, and Neural Computation, and has (co-)chaired the Task Force on Mining Complex Astronomical Data and the Neural Networks Technical Committee (a TC of the IEEE Computational Intelligence Society). Peter led an EPSRC-funded consortium of six UK universities on developing new mathematics for personalised healthcare. He was a recipient of a Fulbright Fellowship to work at the NEC Research Institute, Princeton, USA, on the dynamics of recurrent neural networks, the UK–Hong Kong Fellowship for Excellence, three Outstanding Paper of the Year Awards from the IEEE Transactions on Neural Networks and the IEEE Transactions on Evolutionary Computation, and the Best Paper Award at ICANN 2002.
Prof. Dr.ir. Johan Suykens
KU Leuven, ESAT-Stadius and Leuven.AI Institute
Title: Deep Learning and Kernel Machines
Abstract: Over the last decades, with neural networks and deep learning, several powerful architectures have been proposed, including convolutional neural networks (CNNs), stacked autoencoders, deep Boltzmann machines (DBMs), deep generative models and generative adversarial networks (GANs). On the other hand, with support vector machines (SVMs) and kernel machines, solid foundations in learning theory and optimization have been achieved. Within this talk, we outline a unifying picture and show several new synergies, for which model representations and duality principles play an important role. A recent example is restricted kernel machines (RKMs), which connect least squares support vector machines (LS-SVMs) to restricted Boltzmann machines (RBMs). New developments on this will be shown for deep learning, generative models, multi-view and tensor-based models, latent space exploration, robustness and explainability.
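As background on why kernel machines offer such solid optimization foundations: in an LS-SVM, the inequality constraints of the classical SVM become equalities, so training reduces to solving a single linear system instead of a quadratic program. The sketch below implements the standard LS-SVM regression formulation with a Gaussian kernel (the kernel width and regularization values are illustrative assumptions, not ones from the talk):

```python
import numpy as np

def rbf_kernel(X1, X2, sigma):
    # Gaussian (RBF) kernel matrix between two sets of points.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma, sigma):
    """Solve the LS-SVM (regression form) linear system for bias b and duals alpha:
       [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]

def lssvm_predict(Xtr, b, alpha, Xte, sigma):
    return rbf_kernel(Xte, Xtr, sigma) @ alpha + b

# Toy 1-D regression problem: fit one period of a sine wave.
X = np.linspace(0, 1, 20)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
b, alpha = lssvm_fit(X, y, gamma=100.0, sigma=0.2)
y_hat = lssvm_predict(X, b, alpha, X, sigma=0.2)
print(float(np.abs(y_hat - y).max()))
```

The entire training step is the one call to `np.linalg.solve`, which is the practical payoff of the equality-constrained formulation.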
Short Bio: Johan A.K. Suykens was born in Willebroek, Belgium, on May 18, 1966. He received the master degree in Electro-Mechanical Engineering and the PhD degree in Applied Sciences from the Katholieke Universiteit Leuven, in 1989 and 1995, respectively. In 1996 he was a Visiting Postdoctoral Researcher at the University of California, Berkeley. He has been a Postdoctoral Researcher with the Fund for Scientific Research FWO Flanders and is currently a full Professor with KU Leuven. He is author of the books “Artificial Neural Networks for Modelling and Control of Non-linear Systems” (Kluwer Academic Publishers) and “Least Squares Support Vector Machines” (World Scientific), co-author of the book “Cellular Neural Networks, Multi-Scroll Chaos and Synchronization” (World Scientific), and editor of the books “Nonlinear Modeling: Advanced Black-Box Techniques” (Kluwer Academic Publishers), “Advances in Learning Theory: Methods, Models and Applications” (IOS Press) and “Regularization, Optimization, Kernels, and Support Vector Machines” (Chapman & Hall/CRC). In 1998 he organized an International Workshop on Nonlinear Modelling with Time-series Prediction Competition. He has served as associate editor for the IEEE Transactions on Circuits and Systems (1997-1999 and 2004-2007), the IEEE Transactions on Neural Networks (1998-2009), the IEEE Transactions on Neural Networks and Learning Systems (from 2017) and the IEEE Transactions on Artificial Intelligence (from April 2020). He received the IEEE Signal Processing Society 1999 Best Paper Award, a 2019 Entropy Best Paper Award and several Best Paper Awards at international conferences. He is a recipient of the International Neural Networks Society INNS 2000 Young Investigator Award for significant contributions in the field of neural networks.
He has served as a Director and Organizer of the NATO Advanced Study Institute on Learning Theory and Practice (Leuven 2002), as a program co-chair for the International Joint Conference on Neural Networks 2004 and the International Symposium on Nonlinear Theory and its Applications 2005, as an organizer of the International Symposium on Synchronization in Complex Networks 2007, as a co-organizer of the NIPS 2010 workshop on Tensors, Kernels and Machine Learning, and as chair of ROKS 2013. He has been awarded ERC Advanced Grants in 2011 and 2017, and was elevated to IEEE Fellow in 2015 for developing least squares support vector machines. He is currently serving as program director of the Master AI at KU Leuven.
Prof. Nikola Kasabov
Fellow IEEE, Fellow RSNZ, Fellow INNS College of Fellows,
Professor of Knowledge Engineering and Founding Director of KEDRI, Auckland University of Technology, Auckland, New Zealand
George Moore Chair/Professor, University of Ulster, UK,
Honorary Professor, Teesside University, UK, and the University of Auckland, NZ
Title: Brain-Inspired Data Analytics for Incremental and Transfer Learning of Cognitive Spatio-Temporal Data and for Knowledge Transfer
Abstract: The talk argues and demonstrates that brain-inspired spiking neural network (SNN) architectures can be used for incremental and transfer learning, i.e. to learn new data and new classes/tasks/categories incrementally, utilising some previously learned knowledge. Similarly to how the brain manifests transfer learning, these SNN models do not need to be restricted in the number of layers, neurons in each layer, etc., as they adopt self-organising learning principles. The newly learned knowledge can be extracted in the form of graphs and symbolic fuzzy rules, and its evolution traced over time. The presented approach is illustrated on an exemplar brain-inspired SNN architecture, NeuCube (free software and open source available from www.kedri.aut.ac.nz/neucube and from www.neucube.io). The extraction of symbolic rules from NeuCube for each learning task and each subject allows for knowledge transfer between humans and machines in an adaptive, evolving, interactive way. This opens the field to building new types of open and transparent BCI and AI systems. More details can be found in: N. Kasabov, Time-Space, Spiking Neural Networks and Brain-Inspired Artificial Intelligence, Springer, 2019, https://www.springer.com/gp/book/9783662577134.
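For readers new to SNNs, the basic computational unit in architectures of this kind is a spiking neuron, commonly modelled as a leaky integrate-and-fire (LIF) unit. The sketch below is a generic LIF simulation with assumed parameter values (illustrative only, not NeuCube code): membrane potential leaks toward zero, integrates input current, and emits a spike and resets whenever it crosses a threshold.

```python
def lif_simulate(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire neuron: dv/dt = (-v + I) / tau,
    with a spike emitted and the potential reset when v crosses v_thresh."""
    v, spikes = 0.0, []
    for t, I in enumerate(input_current):
        v += dt * (-v + I) / tau       # leaky integration of input current
        if v >= v_thresh:
            spikes.append(t)           # record spike time
            v = v_reset                # reset membrane potential
    return spikes

# A constant supra-threshold input produces a regular spike train.
spikes = lif_simulate([1.5] * 200)
print(len(spikes))
```

Information is thus carried in spike timing rather than continuous activations, which is what makes graph- and rule-extraction over spatio-temporal spiking patterns possible in systems like NeuCube.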
Short Bio: Professor Nikola Kasabov is a Fellow of IEEE, a Fellow of the Royal Society of New Zealand, a Fellow of the INNS College of Fellows, and a DVF of the Royal Academy of Engineering, UK. He is the Founding Director of KEDRI and Professor at the School of Engineering, Computing and Mathematical Sciences at Auckland University of Technology. Kasabov is a Past President of the Asia Pacific Neural Network Society (APNNS) and the International Neural Network Society (INNS). He is a member of several technical committees of the IEEE Computational Intelligence Society and was a Distinguished Lecturer of IEEE (2012-2014). He is Editor of the Springer Handbook of Bio-Neuroinformatics, Editor of the Springer Series of Bio- and Neurosystems, and Editor-in-Chief of the Springer journal Evolving Systems. He is Associate Editor of several international journals. Kasabov holds MSc and PhD degrees from TU Sofia, Bulgaria. His main research interests are in the areas of neural networks, intelligent information systems, soft computing, bioinformatics and neuroinformatics. He has published more than 650 publications. He has extensive academic experience at various academic and research organisations in Europe and Asia, including: George Moore Chair in Data Analytics at the University of Ulster; Professor at the University of Otago, NZ; Advisory Professor at Shanghai Jiao Tong University; Visiting Professor at ETH/University of Zurich and Robert Gordon University, UK; and Honorary Professor at the University of Auckland and Teesside University.
Prof. Kasabov has received a number of awards, among them: Doctor Honoris Causa from Obuda University, Budapest; the INNS Ada Lovelace Meritorious Service Award; the NN Best Paper Award for 2016; the APNNA ‘Outstanding Achievements Award’; the INNS Gabor Award for ‘Outstanding contributions to engineering applications of neural networks’; an EU Marie Curie Fellowship; the Bayer Science Innovation Award; the APNNA Excellent Service Award; the RSNZ Science and Technology Medal; the 2015 AUT Medal; and Honorable Membership of the Bulgarian, Greek and Scottish Societies for Computer Science. More information about Prof. Kasabov can be found on his web site: https://academics.aut.ac.nz/nkasabov
Prof. Eunika Mercier-Laurent
Université de Reims Champagne-Ardenne, CReSTIC/MODECO
Title: How can Artificial Intelligence efficiently support Sustainable Development?
Abstract: This talk considers the multiple roles AI may play in sustainability. Sustainable development is among the greatest challenges facing humanity, and sustainability and development are apparently opposed. The current efforts to face the Planet Crisis through separate actions generate less impact than expected. Artificial Intelligence approaches and the capacity of available technologies are underexplored. Eco-innovation actions focus mainly on smart transportation, smart use of energy and water, and waste recycling, but do not consider the necessary evolution of behaviors and focus. The trendy digital transformation mostly follows traditional approaches. Concepts such as the Smart, Intelligent, Innovative, Green or Wise City, invented to promote existing technology, are transforming the IT market. Most offers consist of data processing with statistical/optimization methods. But AI can do better – AI approaches and techniques, combined with adequate thinking, may help innovate the way we face the Planet Crisis.
Short Bio: Eunika Mercier-Laurent is an electronic engineer with a PhD in computer science, an expert in artificial intelligence, an associate researcher with the University of Reims Champagne-Ardenne, and Professor at EPITA International Masters and SKEMA. She has over 15 years of involvement with IFIP, including the Chair position of Technical Committee 12 on Artificial Intelligence since 2019 and the Chair of WG 12.6 (AI for Knowledge Management). She was elected representative of TC12 in France in 2018. Her teaching and MOOCs include Knowledge Management & Innovation powered by AI, Ethical Development of AI Systems, Innovation Ecosystems and Innovation Week Challenges. After working as a researcher at INRIA, and as a computer designer and manager of innovative AI applications with Groupe Bull, she founded Global Innovation Strategies, devoted to all aspects of Knowledge Innovation. Among her research topics are: knowledge and eco-innovation management systems, methods and techniques for innovation, knowledge modelling and processing, complex problem solving, AI for sustainability, eco-design and the impacts of artificial intelligence. She is President of Innovation3D, International Association for Global Innovation, an expert for EU programs, a member of the Managing Body of the EU K4I (https://www.knowledge4innovation.eu), and the author of over one hundred scientific publications and books, among the latest “The Innovation Biosphere, Planet and Brains in Digital Era” and “Intelligence in Energy” (co-authored with G. Kayakutlu).
Research interests: Artificial Intelligence, Knowledge Management, Complex Systems, Innovation Ecosystems
Jose C. Principe
University of Florida
Title: Backpropagation Free Deep Learning
Abstract: This talk presents recent results that show the feasibility of training deep network classifiers without backpropagation. We will prove that it is possible to substitute error propagation under general conditions and practically achieve the same performance as conventional algorithms. This methodology allows modularization of the algorithmic pipeline and improves explainability. We will then address some of the benefits of this technology for applications.
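The talk’s specific algorithms are not reproduced here, but the feasibility of classification without backpropagation can be illustrated with a deliberately simple construction in the spirit of random-feature/extreme-learning-machine methods (all data, dimensions and parameter values below are assumptions for illustration, not the method presented in the talk): the hidden layers stay fixed, and the only trained parameters form a linear readout fit in closed form, so no error signal is ever propagated backward through the network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-class dataset: two well-separated Gaussian blobs in 2-D.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([-1.0] * 100 + [1.0] * 100)

# Two fixed random hidden layers; no gradients ever flow through them.
W1 = rng.normal(size=(2, 64))
W2 = rng.normal(size=(64, 64))
H = np.tanh(np.tanh(X @ W1) @ W2)

# The only training step: a closed-form ridge-regression fit of the readout.
w = np.linalg.solve(H.T @ H + 1e-2 * np.eye(64), H.T @ y)
acc = np.mean(np.sign(H @ w) == y)
print(acc)
```

This toy shows only that backpropagation is not logically required for deep classifiers; the approaches discussed in the talk replace it with more principled local training criteria.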
Short Bio: Jose C. Principe (M’83-SM’90-F’00) is a Distinguished Professor of Electrical and Computer Engineering and Biomedical Engineering at the University of Florida, where he teaches advanced signal processing, machine learning and artificial neural network (ANN) modeling. He is the Eckis Endowed Professor and the Founder and Director of the University of Florida Computational NeuroEngineering Laboratory (CNEL), www.cnel.ufl.edu. The CNEL Lab has been studying signal and pattern recognition principles based on information-theoretic criteria (entropy and mutual information).
Dr. Principe is an IEEE, IABME, and AIMBE Fellow. He was awarded the IEEE Neural Network Pioneer Award, the IEEE Shannon-Nyquist Technical Achievement Award from the Signal Processing Society, the EMBS Career Achievement Award, and the Teacher-Scholar of the Year award from the University of Florida. He is a past Chair of the Technical Committee on Neural Networks of the IEEE Signal Processing Society, Past President of the International Neural Network Society, and past Editor-in-Chief of the IEEE Transactions on Biomedical Engineering. Dr. Principe has more than 800 publications. He has directed 102 Ph.D. dissertations and 65 Master’s theses. In 2000 he wrote an interactive electronic book entitled “Neural and Adaptive Systems”, published by John Wiley and Sons, and more recently co-authored several books: “Brain Machine Interface Engineering” (Morgan and Claypool), “Information Theoretic Learning” (Springer), and “Kernel Adaptive Filtering” (Wiley).