Few technologies to hit the world have been as profound or transformative as artificial intelligence (AI). AI and its subset, machine learning (ML), represent not only a fundamental shift in how consumers engage with technology now and into the future but also an entirely new approach to system design.
Billions of people already use AI, whether
they realize it or not. Streaming video and music suggestions are powered by
AI; health care and wellness apps deliver better user experiences based on AI
and ML algorithms. And the wake phrases for smart speakers from companies such as Google or Amazon might as well be "hey, AI!"
These solutions are developed in a way that's vastly different from traditional methods of software design. Traditional code is written sequentially, line by line, with the developer telling the computer precisely what to do in each circumstance.
ML, by contrast, is deployed as models, created by frameworks that learn from data rather than follow explicit instructions. Models, therefore, act like newborn babies: you can never really be sure of just what you will be getting until they arrive.
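To make that contrast concrete, here is a minimal, purely illustrative Python sketch (not drawn from any Arm material): a hand-written rule sits next to a decision boundary that is derived from labeled examples instead of being coded by hand. The spam example and all names here are hypothetical.

```python
# Traditional approach: the developer states the rule explicitly.
def is_spam_rule(msg: str) -> bool:
    return "free money" in msg.lower()

# ML-style approach: the "rule" (here, a single threshold on message
# length) is derived from labeled examples rather than written by hand.
def train_threshold(examples):
    # examples: list of (length, is_spam) pairs
    spam_lengths = [n for n, s in examples if s]
    ham_lengths = [n for n, s in examples if not s]
    mean = lambda xs: sum(xs) / len(xs)
    # Learned decision boundary: midpoint between the two class means.
    return (mean(spam_lengths) + mean(ham_lengths)) / 2

data = [(120, True), (140, True), (30, False), (50, False)]
threshold = train_threshold(data)

def is_spam_learned(length: int) -> bool:
    return length > threshold
```

The point is the shift in where the logic comes from: change the training data and the learned behavior changes, with no line of decision logic rewritten by the developer.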
This functionality is already widely
deployed today, even as the hardware landscape evolves, with companies developing approaches tailored to one style of AI or another, one hardware platform or another. Billions of smartphones already run ML algorithms, and more than 40 percent of AI/ML systems currently use the CPU as the main execution engine.
But the landscape is changing rapidly, so it's crucial to stay on top of the latest research, thinking and design approaches for AI systems. To do that, look no further than Arm TechCon 2019, Oct. 8-10, at the San Jose Convention Center, where AI and ML loom large in this year's vibrant and compelling technical program.
Here are five must-see sessions, in chronological order, that offer attendees deep and current insights on AI and ML design:
Vision At the Edge
Developers are leveraging machine learning at the edge and on endpoints to optimize object detection in the industrial, automotive and consumer markets. Deep neural networks are vastly expanding object detection's applicability and usage. Arm Senior Product Manager Ravi Mahatme will discuss techniques for implementing object detection at the edge on Arm neural processing units (NPUs), and he will contrast object detection performed on NPUs vs. CPUs.
Vision At the Edge: Object Detection using Deep Neural Networks on Arm NPU, Tuesday, Oct. 8, 11:30 a.m., Executive Ballroom 210A.
From Device to Cloud
Relentless improvements in compute efficiency are changing how and where we allocate compute cycles for workloads like AI, which was once confined to vast server farms with enormous computing horsepower. Today, distributed computing is moving to the edge and onto endpoints, and AI and ML workloads are moving with it.
Arm Senior Product
Marketing Manager Tim Hartley will talk about the challenges of building
scalable applications that can seamlessly extract meaningful insight from
multiple devices, augmented by cloud inference when required, illustrated
throughout with real examples from retail and security.
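One common pattern behind such device-to-cloud applications (sketched here purely for illustration, with hypothetical stub models standing in for real networks; this is not Hartley's implementation) is to run inference locally and escalate to a larger cloud-hosted model only when on-device confidence is low:

```python
# Illustrative device-to-cloud inference pattern with stub models.
def edge_model(frame):
    # Small on-device model: fast, but less confident on hard inputs.
    return ("person", 0.9) if frame == "easy" else ("unknown", 0.4)

def cloud_model(frame):
    # Larger remote model: slower and costlier, but more accurate.
    return ("person", 0.99)

CONFIDENCE_THRESHOLD = 0.8

def classify(frame):
    label, conf = edge_model(frame)
    if conf >= CONFIDENCE_THRESHOLD:
        return label, "edge"          # answered locally, no network round trip
    label, conf = cloud_model(frame)  # escalate only the hard cases
    return label, "cloud"
```

The design choice is the threshold: most frames are resolved on-device for latency and privacy, while only ambiguous ones incur a cloud round trip.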
From Device to Cloud: Building Real-World Distributed ML Applications, Wednesday, Oct. 9, 9 a.m., Executive Ballroom 210G.
Are Neural Networks a Model for the Human Brain?
What happens when an AI expert sits down to dinner with his wife, who happens to be a neuroscientist? Some amazing insights can emerge. Ian Bratt, Arm Fellow and senior director of technology, will deliver a keynote that revolves around the singularity, the moment in the future when AI and machine learning become smarter than humans. Just how far off is the singularity, and what does it really have to do with current ML problems? Ian will offer insights and technology directions that show just how close biological brain functions are to AI compute.
Are Neural Networks a Model for the Human Brain? Wednesday, Oct. 9, 10:20 a.m., Grand Ballroom 220A.
Intelligent IoT Sensors using Low Power MCUs
We estimate that by 2035 there will be a trillion connected devices capturing vast amounts of data. How will this be achieved, and how do AI and ML fit into the equation? Google Staff Research Engineer Pete Warden and Kishore Manghnani, co-founder and CEO of Shoreline IoT, will discuss how the combination of TensorFlow Lite Micro and power-efficient Arm microcontrollers can transform legacy industrial assets, such as motors, pumps and compressors, into intelligent systems that can be monitored for anomalies and optimized.
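As a rough illustration of the kind of anomaly monitoring described above (a toy statistical sketch, not TensorFlow Lite Micro itself; the data and thresholds are invented), a device can learn a sensor's normal operating profile and flag readings that deviate sharply from it:

```python
import statistics

def fit_baseline(readings):
    # Learn the "normal" operating profile from historical sensor data.
    return statistics.mean(readings), statistics.stdev(readings)

def is_anomaly(reading, mean, stdev, k=3.0):
    # Flag readings more than k standard deviations from the baseline.
    return abs(reading - mean) > k * stdev

# Hypothetical vibration readings from a healthy motor.
normal = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98, 1.03]
mean, stdev = fit_baseline(normal)
```

In practice the statistical model would be replaced by a small trained neural network running on the microcontroller, but the monitoring loop (learn normal, flag deviations) is the same idea.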
Enabling Intelligent Wireless IoT Sensors at the Edge using Low Power MCUs and TensorFlow, Wednesday, Oct. 9, 5 p.m., AI Coffee Bar, Booth #1237.
Next-Generation Machine Learning for Mobile
While many people still think of machine learning as the domain of huge cloud servers with vast amounts of compute power, much of the expected growth of intelligence will be at the edge, in smaller, more constrained environments. Qeexo, an AI startup based in Mountain View, has spent the past five years developing a lightweight machine learning platform and embedded engine that enables radical new capabilities on previously overlooked device categories.
Qeexo co-founder Chris Harrison, who is also an assistant professor of Human-Computer Interaction at Carnegie Mellon University, will describe a vision for next-generation machine learning on mobile devices and embedded systems.
Next-Generation Machine Learning for Mobile and Embedded Platforms, Thursday, Oct. 10, 1 p.m., AI Coffee Bar, Booth #1237.
These are just a subset of the sessions in the broadest and deepest AI/ML track Arm TechCon has ever offered. To register for the event, please visit the registration page on the Arm TechCon site. And remember, Expo Passes are always free!