Simulation has become a crucial technology to help companies reduce time to market and design costs. Engineers and researchers use simulation for a variety of applications, including:
- Using a virtual model (aka a digital twin) to simulate and test their complex systems early and often in the design process.
- Maintaining a digital common thread with traceability through requirements, system architecture, component design, code and testing.
- Expanding their systems to perform predictive maintenance (PdM) and failure analysis.
Many organizations are improving their simulation capabilities by incorporating artificial intelligence (AI) into their model-based design. Historically, these two areas have been separate, but when used together effectively, they create significant value for engineers and researchers. The strengths of each technology offset the weaknesses of the other, helping organizations solve three key challenges.
Challenge 1: Using simulation to create better training data for more accurate AI models
Simulation models can synthesize good, clean, cataloged data that would be difficult or expensive to collect in the real world. While most AI models run with fixed parameter values, they are constantly exposed to new data that may not be captured in the training set. Left unnoticed, these models generate inaccurate insights or fail outright, and engineers then spend hours trying to figure out why the model isn’t working.
Simulation can help engineers overcome these challenges. Rather than focusing solely on optimizing the architecture and parameters of the AI model, time spent improving the training data often leads to broader improvements in accuracy.
Because a model’s performance depends so heavily on the quality of the data it’s trained on, engineers can improve results through an iterative process: simulate data, update the AI model, observe which conditions it predicts poorly, and collect more simulated data for those conditions.
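To make that loop concrete, here is a minimal sketch in Python. It illustrates the workflow rather than any specific tool: `simulate_batch()` is a hypothetical stand-in for a real simulation model or digital twin, and a random forest stands in for whatever AI model is being trained.

```python
# Minimal sketch of the simulate -> train -> evaluate -> augment loop.
# simulate_batch() is a hypothetical stand-in for a real simulation model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

def simulate_batch(conditions, n_samples):
    """Toy stand-in for a simulator: maps operating conditions to noisy sensor data."""
    X, y = [], []
    for c in conditions:
        load = rng.normal(c["load"], 0.05, n_samples)
        temp = rng.normal(c["temp"], 2.0, n_samples)
        X.append(np.column_stack([load, temp]))
        y.append(np.sin(3 * load) * np.log1p(np.abs(temp)) + rng.normal(0, 0.05, n_samples))
    return np.vstack(X), np.concatenate(y)

conditions = [{"load": l, "temp": t} for l in (0.2, 0.6, 1.0) for t in (20, 60)]
X_train, y_train = simulate_batch(conditions, n_samples=300)

for _ in range(3):  # a few refinement rounds
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

    # Score each operating condition separately to find where the model predicts poorly.
    errors = []
    for c in conditions:
        X_val, y_val = simulate_batch([c], n_samples=100)
        errors.append((mean_absolute_error(y_val, model.predict(X_val)), c))

    # Simulate additional training data for the two worst-performing conditions.
    worst = [c for _, c in sorted(errors, key=lambda e: e[0], reverse=True)[:2]]
    X_new, y_new = simulate_batch(worst, n_samples=300)
    X_train, y_train = np.vstack([X_train, X_new]), np.concatenate([y_train, y_new])
```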
Challenge 2: AI for new product features
Simulation has become an important part of the design process for engineers building embedded systems for applications such as control systems and signal processing. In many cases, these engineers develop virtual sensors: software components that estimate a quantity that is not directly measured by the available sensors. Traditional approaches are limited in their ability to capture the nonlinear behavior present in many real-world systems, so engineers are turning to AI-based approaches that have the flexibility to model that complexity. They use data (either measured or simulated) to train an AI model that can predict the unobserved state from the observed states, and then integrate that model into the system.
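As a rough sketch of what such a virtual sensor can look like, the example below trains a small neural network to estimate a quantity that is not directly measured (a hypothetical internal temperature) from signals that are (voltage, current, ambient temperature). The synthetic data generation is a placeholder for output from a simulation model or test bench.

```python
# Minimal sketch of an AI-based virtual sensor: a small neural network estimates an
# unmeasured internal temperature from measured voltage, current, and ambient temperature.
# The synthetic data below is a hypothetical stand-in for simulated or measured data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
voltage = rng.uniform(300, 400, n)        # measured signals
current = rng.uniform(-200, 200, n)
ambient = rng.uniform(-10, 40, n)

# Assumed nonlinear relationship standing in for the true (unmeasured) internal state.
internal_temp = ambient + 2e-4 * current**2 + 0.02 * np.abs(current) * (voltage - 350) / 50
internal_temp += rng.normal(0, 0.5, n)    # measurement/model noise

X = np.column_stack([voltage, current, ambient])
X_train, X_test, y_train, y_test = train_test_split(X, internal_temp, random_state=0)

virtual_sensor = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
)
virtual_sensor.fit(X_train, y_train)
print("mean estimation error:", np.abs(virtual_sensor.predict(X_test) - y_test).mean())
```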
In this case, the AI model is part of the control algorithm that runs on the physical hardware, and it usually has to be programmed in a low-level language like C/C++. These requirements can limit the types of machine learning models suitable for such applications, requiring technical professionals to try multiple models and compare the trade-offs between accuracy and on-device performance.
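One rough way to frame that comparison, sketched below with generic scikit-learn models and synthetic data, is to fit a few candidates of increasing complexity and weigh accuracy against a per-sample latency proxy. A real embedded workflow would instead generate C/C++ code from the chosen model and profile it on the target processor; the numbers here are only Python-side stand-ins.

```python
# Hypothetical comparison of candidate models for an embedded application:
# accuracy versus a rough per-sample prediction-latency proxy measured in Python.
import time
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (4000, 3))
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2] + rng.normal(0, 0.05, 4000)  # stand-in plant data
X_train, y_train, X_test, y_test = X[:3000], y[:3000], X[3000:], y[3000:]

candidates = {
    "linear":      LinearRegression(),
    "small tree":  DecisionTreeRegressor(max_depth=6, random_state=0),
    "mlp (16x16)": MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0),
}

for name, model in candidates.items():
    model.fit(X_train, y_train)
    mae = np.abs(model.predict(X_test) - y_test).mean()

    start = time.perf_counter()
    for row in X_test[:100]:
        model.predict(row.reshape(1, -1))   # one sample at a time, as on-device
    latency_us = (time.perf_counter() - start) / 100 * 1e6

    print(f"{name:12s}  MAE={mae:.3f}  per-sample latency ~ {latency_us:.0f} us")
```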
At the forefront of research in this area, reinforcement learning takes this approach a step further. Instead of learning just the estimator, reinforcement learning learns the entire control strategy. This technique has proven effective in demanding applications such as robotics and autonomous systems, but building these types of models requires an accurate model of the environment (never a guarantee) as well as tremendous computing power to run large numbers of simulations.
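As a rough illustration of that workflow, the sketch below trains a reinforcement-learning controller entirely against a simulated environment. It assumes the third-party gymnasium and stable-baselines3 packages and uses the toy Pendulum-v1 task as a stand-in for a plant model; a real project would substitute its own simulation environment and far more training time.

```python
# Sketch of learning a control strategy with reinforcement learning in simulation.
# Assumes gymnasium and stable-baselines3; Pendulum-v1 stands in for a plant model.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("Pendulum-v1")                 # simulated environment (the "plant")
agent = PPO("MlpPolicy", env, verbose=0)      # policy network = the control strategy
agent.learn(total_timesteps=50_000)           # many simulated steps; this is the
                                              # computationally expensive part

# Roll out the learned controller in the same simulated environment.
obs, _ = env.reset(seed=0)
total_reward = 0.0
for _ in range(200):
    action, _ = agent.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += float(reward)
    if terminated or truncated:
        obs, _ = env.reset()
print("episode return:", total_reward)
```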
Challenge 3: Balancing “right” and “now”
Companies have always struggled with time to market. Companies that offer customers a flawed solution risk irreparable damage to their brand, especially startups. Moving too slowly carries its own risk: “also-rans” struggle to gain a foothold in an established market. Simulation was a major design innovation when it was introduced, but its steadily improving fidelity and ability to create ever more realistic scenarios can stymie engineers. Too often, companies try to create “perfect” simulation models that take so long to build that the market has evolved by the time they are finished.
Finding the right balance between speed and quality requires technical professionals to recognize that there will always be environmental nuances that cannot be simulated. AI models should never be trusted blindly, even when used as proxies for complex, high-fidelity systems.
The future of AI for simulation
AI and simulation technologies have built and maintained their own momentum for nearly a decade. Now, given the symbiotic nature of their strengths and weaknesses, engineers are beginning to see a lot of value at their intersection.
As models continue to serve increasingly complex applications, AI and simulation will become even more important tools in the engineer’s toolbox. With the ability to develop, test, and validate models in an accurate and affordable way, these methods will continue to grow in use.
Seth DeLand is Data Analytics Product Marketing Manager at MathWorks.