Cybersecurity for AI in light of Tesla AI Day

The world is getting ready for a slew of new announcements at Tesla AI Day, an annual event where the company presents its innovations and plans for various business units to the public. The innovations presented at Tesla AI Day demonstrate how machine learning and artificial intelligence systems can be used in robotics and autonomous vehicles.

Commenting on the 2022 edition of Tesla AI Day, Elena Krupenina, Senior Data Scientist at Kaspersky, said: “Along with the effectiveness and usability of artificial intelligence-based solutions, the cybersecurity aspect is no less important. To ensure the security of an AI system for its users, two domains should be considered: non-transparent algorithms, which can lead to unexplainable outcomes, and privacy. Why exactly these two?”

“Most machine learning (ML) models that underpin complex AI systems generate results or actions that are beyond human interpretation, and these may be unexpected and unclear, for example, when a robot becomes confused and grabs one object instead of another. Such unexplainable outcomes can pose risks both for the system itself and for humans. Therefore, developers should build mechanisms and tools to evaluate and explain the non-transparent decisions made by the AI, and then calibrate its parameters and metrics to avoid such outcomes.”
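One common tool for probing non-transparent model decisions of the kind described above is permutation importance: shuffle one input feature and measure how much the model's error grows. The sketch below is a minimal, hypothetical illustration; the toy model and all names are assumptions for the example, not part of any Kaspersky or Tesla system.

```python
import random

# Hypothetical opaque model: in truth only feature 0 influences it,
# but an outside observer cannot see that from the predictions alone.
def model(x):
    return 3.0 * x[0] + 0.0 * x[1]

def mse(model, X, y):
    """Mean squared error of the model on a dataset."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, feature, seed=0):
    """Error increase when one feature's column is randomly shuffled."""
    rng = random.Random(seed)
    baseline = mse(model, X, y)
    col = [x[feature] for x in X]
    rng.shuffle(col)
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return mse(model, X_perm, y) - baseline

rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [3.0 * a for a, _ in X]  # target depends only on feature 0

imp0 = permutation_importance(model, X, y, feature=0)
imp1 = permutation_importance(model, X, y, feature=1)
# Shuffling the influential feature degrades accuracy far more,
# revealing which inputs actually drive the opaque decision.
```

Scores like these give developers a concrete signal for the calibration step the quote mentions: features with unexpectedly high importance can be audited before deployment.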

“AI-driven devices can use various sensors to collect data: cameras, microphones, radars, lidars, ultrasonic sensors, infrared cameras, and others. Huge amounts of data are required to ensure the high quality of AI, so it is critical to minimize the risk of sensitive data disclosure. To address this issue, it is necessary to develop explainability and privacy-enhancing technologies, such as on-device processing and federated learning, which minimize the data required for the correct operation of the model.”
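The federated learning idea mentioned in the quote can be sketched in a few lines: each device trains on its own private data and shares only model weights, which a server averages. This is a minimal, assumed illustration using a one-parameter linear model; the function names and data are hypothetical, not any vendor's actual API.

```python
def local_update(weight, data, lr=0.1, epochs=20):
    """Gradient descent on y = w * x using one device's private data.

    Raw (x, y) pairs never leave the device; only the trained
    weight is returned to the server.
    """
    for _ in range(epochs):
        grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
        weight -= lr * grad
    return weight

def federated_average(weights, sizes):
    """Server step: average weights, weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * n for w, n in zip(weights, sizes)) / total

# Two devices hold private samples drawn from the same rule y = 2x.
device_data = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (0.5, 1.0), (1.5, 3.0)],
]

global_w = 0.0
for _ in range(5):  # communication rounds
    local_ws = [local_update(global_w, d) for d in device_data]
    global_w = federated_average(local_ws, [len(d) for d in device_data])
# global_w converges toward 2.0 while raw data stays on each device
```

The design choice is the one the quote highlights: the server only ever sees model parameters, so the sensitive sensor data collected on each device is never centralized.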