18 Months Beats Seven Years: The New Reality of Automotive Development

Having spent years at the forefront of automotive technology development at Arm, I’ve witnessed firsthand the seismic shift from traditional hardware-software co-design to truly software-defined systems. At a recent panel discussion at the 62nd Design Automation Conference (DAC) in San Francisco, alongside colleagues from AMD, Siemens EDA and Collins Aerospace, we dove deep into what this transformation really means – and why it’s not just evolutionary, but revolutionary.
The infrastructure imperative
When people ask me about software-defined systems, I always start with infrastructure. It’s not just about writing software that can be updated – it’s about building a comprehensive ecosystem that enables continuous innovation throughout a product’s lifecycle.
In automotive, this means creating an infrastructure that spans from cloud to car, enabling software updates and new feature rollouts even after a vehicle leaves the dealership. But here’s the critical insight: This infrastructure must be designed from day one. You can’t retrofit true software-defined capabilities onto hardware that wasn’t conceived with this flexibility in mind.
The hardware being developed today needs to be future-proofed, built with the ability to support software workloads that don’t yet exist.
Cloudy crystal ball
This timeline challenge is something we live with every day at Arm. We develop compute platforms that take roughly five years to reach road-ready vehicles: they pass through our partners who build systems-on-chip (SoCs), then go into devices and finally into cars. The qualification process alone adds significant time to this journey.

I’d be crazy to claim I know exactly what workloads will look like five years out. We can’t even predict what’s happening six months ahead in AI! This uncertainty has driven us to embrace virtual platforms and early software development in ways that seemed impossible just a few years ago.
Breaking the 18-month lag
Here’s a concrete example of how virtual platforms are changing the game: Traditionally, when we launched automotive-enhanced (AE) IP, there was an 18-month gap between IP availability and the point at which software developers could actually start working with real silicon. The AE IP we launched in 2021 only became available to developers on silicon in late 2022 – a significant lag.
But in 2024, we did something different. Working with partners like Siemens, we had virtual platforms ready on day zero of our AE technologies launch – March 13, 2024. Software ecosystem partners could immediately begin development work, eliminating that painful 18-month delay. While silicon validation will always be necessary, this approach fundamentally accelerates the development cycle by up to two years.
China innovation
The competitive landscape has been dramatically reshaped by Chinese OEMs that have adopted these methodologies with remarkable speed – in some cases going from initial design to tape-out in 12 months.
And what struck me most wasn’t just the speed – it was the rigor. One of these teams asked more challenging safety, security and mixed-criticality questions than any other OEM I’ve worked with. They’re not cutting corners; they’re simply operating with a fundamentally different development philosophy that embraces virtual development and cloud-based validation.
This is forcing the entire industry to reconsider development timelines. Seven-year design cycles are becoming untenable when competitors can deliver in 18 months without compromising quality or safety.
AI-defined vehicles: The next frontier
We’re already seeing the emergence of AI-defined vehicles, where traditional applications are being transformed by large language models (LLMs) and advanced algorithms. Consider something as simple as a car’s user manual: Instead of a 500-page booklet in your glove compartment, you’ll have an AI assistant that can answer questions about any indicator or function in real time, like AWS’ in-vehicle chatbot prototype, which leverages Arm KleidiAI’s integration with llama.cpp to respond to the driver in less than three seconds.
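To make that concrete, here is a minimal sketch of what an on-device owner’s-manual assistant could look like, assuming the llama-cpp-python bindings to llama.cpp. The model file, prompts and question are illustrative placeholders rather than details of the AWS prototype, and a production assistant would also ground its answers in the actual manual text.

```python
# Minimal sketch of an on-device owner's-manual assistant built on llama.cpp.
# Model path, prompts and question are illustrative placeholders; a production
# system would also retrieve the relevant manual pages before answering.
from llama_cpp import Llama  # Python bindings for llama.cpp

# Load a small, quantized model suited to in-vehicle compute.
llm = Llama(model_path="models/assistant-q4.gguf", n_ctx=2048)

def ask_manual(question: str) -> str:
    """Answer a driver's question about a dashboard indicator or function."""
    response = llm.create_chat_completion(
        messages=[
            {"role": "system",
             "content": "You are the vehicle's owner-manual assistant. "
                        "Answer briefly and accurately."},
            {"role": "user", "content": question},
        ],
        max_tokens=128,
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_manual("What does the amber tyre-pressure warning light mean?"))
```

When llama.cpp is built with KleidiAI support on Arm CPUs, the same application code benefits from the optimized kernels without modification.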

These AI applications need to run across heterogeneous compute platforms – some workloads on CPUs, others on GPUs and specialized tasks on AI accelerators. The decision of how to optimize and distribute these workloads can now be made early in the development cycle using virtual platforms, rather than waiting for physical hardware.
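As a simplified illustration of the kind of decision that can now be made early, the sketch below captures a hypothetical workload-to-compute-unit mapping of the sort a team might iterate on against a virtual platform. The workload names, targets and latency budgets are invented for illustration and are not tied to any particular platform.

```python
# Hypothetical early workload-placement table for a heterogeneous automotive
# compute platform. All names and budgets are invented for illustration; in
# practice each mapping would be evaluated against a virtual platform.
from dataclasses import dataclass

@dataclass
class Placement:
    workload: str
    target: str            # "cpu", "gpu" or "npu"
    latency_budget_ms: float
    safety_critical: bool

PLACEMENTS = [
    Placement("cabin_llm_assistant",  "cpu", 3000.0, safety_critical=False),
    Placement("surround_view_stitch", "gpu",   33.0, safety_critical=False),
    Placement("object_detection",     "npu",   20.0, safety_critical=True),
]

def check_isolation(placements: list[Placement]) -> None:
    """Flag compute targets that mix safety-critical and non-critical work,
    so isolation can be designed in long before silicon exists."""
    by_target: dict = {}
    for p in placements:
        by_target.setdefault(p.target, set()).add(p.safety_critical)
    for target, levels in by_target.items():
        if len(levels) > 1:
            print(f"warning: mixed criticality on {target}; needs partitioning")

check_isolation(PLACEMENTS)
```

Even a toy check like this surfaces the mixed-criticality questions discussed below before any hardware exists.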
Security: Building it in, not bolting it on
One of the most critical aspects of software-defined and AI-defined systems is understanding security requirements from the outset. I shared an example during the panel about a parking payment feature in my own car that still doesn’t work eight months later because the vehicle’s system software doesn’t meet the security standards required for credit card transactions.
This highlights why early visibility into security requirements is crucial for hardware design. When you have mixed-criticality workloads – some highly secure, others less so – you need to understand their interoperability requirements before finalizing the hardware architecture. Virtual platforms enable developers to partition workloads appropriately across hardware, middleware and foundational software layers.
The standardization balance
Standardization remains critical, but it must be applied thoughtfully. At Arm, we’ve worked through initiatives like SOAFEE (Scalable Open Architecture for Embedded Edge) to standardize foundational software elements – boot flows, debug processes, security frameworks – while preserving opportunities for differentiation at the application layer.
This standardization enables significant software reusability across hardware generations. When you build the right standardization into your software stack, the transition from one generation to the next becomes far more efficient, saving both time and cost while enabling the optimization needed for competitive advantage.
The skills evolution
The shift to software-defined systems doesn’t require entirely new skill sets, but it is driving significant cross-pollination of expertise. At Arm, we’ve hired people with 15-20 years of experience in vehicle-level and ECU-level modeling – capabilities we never needed before but are now essential for supporting our ecosystem partners.
This trend is happening across the industry. Hardware teams are adopting more software-like development methodologies, including continuous integration and deployment practices. Meanwhile, software teams are gaining deeper hardware awareness to optimize their applications effectively.
System-level thinking
What excites me most about this transformation is the emergence of true system-level modeling capabilities. We’re no longer just designing individual components; we’re modeling entire ecosystems from the compute IP through the SoC, into the vehicle and extending to cloud infrastructure.
This holistic approach enables us to find and solve macro-level problems early in the design process – wrong cache sizes, incorrect core counts, inadequate cluster configurations – long before they become expensive silicon respins or, worse, field failures.
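As a toy illustration only, and not how Arm or its partners actually model systems, the sketch below sweeps core counts and cache sizes against an invented analytical criterion. Real system-level modeling relies on virtual platforms and measured workload traces, but it shows the kind of macro-level mis-sizing such exploration is meant to catch.

```python
# Toy design-space sweep over core count and L2 cache size against an invented
# cost criterion. Real exploration would replay measured workload traces on a
# virtual platform; this only illustrates catching macro-level mis-sizing early.
from itertools import product

# Invented aggregate demand for the target workload mix.
WORKLOAD_DEMAND = {"threads": 10, "working_set_kib": 6144}

def meets_demand(cores: int, l2_kib: int) -> bool:
    """Invented criterion: enough cores for the thread count and enough L2
    cache to hold at least half of the working set."""
    return (cores >= WORKLOAD_DEMAND["threads"]
            and l2_kib >= 0.5 * WORKLOAD_DEMAND["working_set_kib"])

viable = [
    (cores, l2_kib)
    for cores, l2_kib in product([4, 8, 12, 16], [1024, 2048, 4096, 8192])
    if meets_demand(cores, l2_kib)
]

# The cheapest viable configuration (fewest cores, then least cache) is the
# kind of answer you want long before tape-out, not after.
print(min(viable))  # e.g. (12, 4096) under these invented numbers
```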
The infrastructure to enable software-defined systems represents a fundamental shift in how we approach product development. It’s not just about making software updateable; it’s about creating an ecosystem where innovation can continue throughout a product’s entire lifecycle.
As we continue down this path and embrace future AI-defined vehicles, the companies that master this transition – building the right infrastructure, embracing virtual development and fostering ecosystem collaboration – will define the next generation of automotive technology. The question isn’t whether this shift will happen; it’s how quickly your organization can adapt to lead rather than follow.