FUTURE OF AUTONOMOUS VEHICLES - WITH PAUL NEWMAN

 

AUTONOMOUS VEHICLES: THE NEXT GENERATION

 
Bruce McCabe and Paul Newman
 

What is driving the NEXT generation of autonomous vehicles? How are scientists upping the innovation rate? Why are AVs utterly inevitable, everywhere? How will they transform industries, cities and even economies?

To get a deep sense of where we are headed, I checked in with Paul Newman, a trailblazer in robotics and autonomous systems. Paul is BP Professor of Information Engineering at Oxford University, Founder of the Oxford Robotics Institute, Fellow of the Royal Academy of Engineering and the IEEE in honor of his outstanding contributions to robot navigation, and Founder of Oxa (formerly Oxbotica), a mobile autonomy company involved in commercial deployments around the world.

Paul is a BIG thinker about the future of AVs!

MEETING PAUL NEWMAN

As you will hear, Paul is an extremely positive person and passionate about his field, especially about all the good outcomes that can be unlocked with autonomous vehicles. He is opportunity-focused and he *cares* – which is why I like him so much.

And what’s not to love about a global AV guru who admits he failed his driver’s test five times and “may have reversed into the wall of the test center” on one of them? By the way, I mistakenly called his company “Oxfa” instead of “Oxa” when I was introducing this episode (doh!) and I’m blaming him, because he had me laughing even before we hit ‘record.’ That’s my excuse anyway :-)

We recorded at Oxa HQ in southeast Oxford, UK. Perhaps as a continuous reminder of how Paul and his colleagues are getting on with building real AVs for real industries, an Oxa-automated electric Ford Transit van slowly circled the building as we recorded. The Ford Transit is the ubiquitous workhorse of UK logistics, so I can think of no better symbol for the future of UK logistics than an autonomous E-Transit!

We talked about many aspects of the future of AVs, but mostly, we talked about OPPORTUNITY. Paul is profoundly optimistic about the potential for AVs to drive huge positive change.

Do you consider widescale AV deployments inevitable? I do. Click below to listen to Paul and understand why.

And scroll down to find some of my key takeaways, as well as a full transcript of our conversation. You can also subscribe to “FutureBites with Dr Bruce McCabe” on Spotify or Apple, or wherever you get your podcasts.

Enjoy the podcast!

 
 

KEY TAKEAWAYS

Driving “many somewheres” to reach “one everywhere”

I love that line! It captures in a nutshell Paul’s vision for how he expects autonomous vehicle populations to grow, and the fastest and most efficient research pathway to get there. Each specialist AV fleet deployment (in ports, airports, bus routes, mine sites, on a university campus) yields a rich source of learning and experience that builds robustness and trust in that context, and informs the next, thereby progressively filling out the world.

As opposed to ‘shooting for the moon’ and trying to build AVs for the most complex open environments from the outset, as well as trying to convince everyone such AVs are robust and safe from the outset.

And it fits with the specific deployments customers want and what they are ready for. As Paul said: “No customers come and say ‘I want to drive the world,’ ever.”

He’s especially excited about UK bus routes. Cities are asking for them, and “2 billion instances where a human swapped money for transport on a known route” is a high-value opportunity AND a high-suitability opportunity!

Universal autonomy: one operating system, many platforms

Just as an operating system provides a consistent interface across different hardware, Paul’s approach is a universal autonomy platform that allows continuous innovation without being tied to specific vehicle manufacturers. He compares it to Amazon Web Services, where companies leverage cloud computing resources to create value in their own unique contexts, whatever those might be.
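As a purely illustrative sketch of the operating-system analogy (every class name here is my own invention, not Oxa’s real interfaces), the autonomy “OS” talks to each vehicle through a thin hardware driver, so the autonomy layer never changes when the platform does:

```python
from abc import ABC, abstractmethod

class VehiclePlatform(ABC):
    """Hardware-specific 'driver', analogous to an OS device driver."""
    @abstractmethod
    def apply(self, speed: float, steering: float) -> str: ...

class ETransitVan(VehiclePlatform):
    def apply(self, speed: float, steering: float) -> str:
        return f"E-Transit controls: v={speed}, s={steering}"

class MineHaulTruck(VehiclePlatform):
    def apply(self, speed: float, steering: float) -> str:
        return f"haul-truck controls: v={speed}, s={steering}"

class AutonomyOS:
    """One autonomy stack, many vehicle platforms: the perception and
    planning layer stays the same while the hardware underneath changes."""
    def __init__(self, platform: VehiclePlatform):
        self.platform = platform

    def drive_tick(self) -> str:
        # Placeholder plan output; a real stack would compute this
        # from perception, prediction and planning.
        speed, steering = 8.0, 0.0
        return self.platform.apply(speed, steering)

# The same AutonomyOS drives two very different vehicles.
print(AutonomyOS(ETransitVan()).drive_tick())
print(AutonomyOS(MineHaulTruck()).drive_tick())
```

The point of the sketch is simply that swapping `ETransitVan` for `MineHaulTruck` requires no change to `AutonomyOS`, which is the whole force of the operating-system comparison.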

Partner on the hardware

Extending the above, and on the basis that experts should focus where they have the most expertise, Paul has arrived at a model in which Oxa looks after the AI autonomy systems while partnering with industrial users and original vehicle manufacturers on the business context and the hardware. Each deployment becomes a joint venture with shared IP.

Hence Oxa’s varied deployments: Beep shuttle-buses, airports, mines, buses, delivery vans, and so on. And again, it accelerates innovation and feeds the philosophy of driving many somewheres building up to one everywhere.

AI supervising AI

We talked a lot about humans always staying in the loop in future, in a one-to-many supervisory capacity. Added to this, Paul is also making a step-change in AV safety by adding a ‘higher order’ layer of AI. When the AI navigation system is forced to make a fuzzy interpretation of a situation it has never encountered before, the higher-order layer can make a separate, overriding decision about whether, in those circumstances, a fuzzy interpretation is perfectly acceptable or simply not safe enough.

“There's a thing called the safety path and there's a thing called the performance path. The performance path does all your fancy thinking. It's the guy that understands very beautiful accelerations and lane changes and makes long-distance plans. The safety path is the guy that's going, ‘I don't know how you do your highfalutin thinking, but I'm sitting here. If I get into a moment where I can't get you out of trouble, if that's a bad idea, I'm grabbing the wheel.’”

His flying student / flying instructor story was a great metaphor. I called his ‘safety path’ a ‘meta layer of safety oversight,’ but Paul pointed out there is another very significant benefit: the safety path allows you to “put more sophisticated stuff in the performance path and iterate on that faster.”

In other words, it makes it possible to innovate faster. And that is a very big deal.

And FYI, the AI-supervising-AI approach aligns with the broader trend of multi-agent AI collaboration, which is also radically improving outputs in non-robot AI applications.
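For the curious, the arbitration idea can be sketched in a few lines of code. This is purely illustrative (the class names, the scene-confidence score and the 0.7 threshold are invented stand-ins, not Oxa’s actual architecture): a performance path proposes commands, and an independent safety path gets the final say.

```python
from dataclasses import dataclass

@dataclass
class Command:
    speed: float       # m/s requested by the planner
    steering: float    # steering angle in radians

class PerformancePath:
    """Does the 'fancy thinking': smooth plans, lane changes, etc.
    Here it just proposes a cruising command regardless of the scene."""
    def propose(self, scene_confidence: float) -> Command:
        return Command(speed=12.0, steering=0.05)

class SafetyPath:
    """Independent monitor. It does not understand the planner's
    reasoning; it only judges whether it can still reach a safe
    state. If not, it 'grabs the wheel'."""
    MIN_CONFIDENCE = 0.7   # hypothetical threshold

    def arbitrate(self, cmd: Command, scene_confidence: float) -> Command:
        if scene_confidence < self.MIN_CONFIDENCE:
            # Fuzzy interpretation deemed not safe enough:
            # override with a conservative stop command.
            return Command(speed=0.0, steering=0.0)
        return cmd

# One control tick: the planner proposes, the safety path has final say.
planner, monitor = PerformancePath(), SafetyPath()
for confidence in (0.95, 0.4):   # a clear scene, then a novel one
    final = monitor.arbitrate(planner.propose(confidence), confidence)
    print(confidence, final.speed)   # prints: 0.95 12.0, then 0.4 0.0
```

Note how the design also delivers the benefit Paul highlights: because the conservative monitor is separate, the performance path can be made more sophisticated and iterated on quickly without re-arguing the safety case each time.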

Focus on AI teaching

Most AI research conversations start with learning models, but nowadays Paul is especially focused on AI teaching models, where he is making big gains in overall AI performance.

His comparison with a young engineering graduate was the perfect starting point: “If the things that you see [in the field] are different from the syllabus you were given, then that's interesting feedback for syllabus generation.”

So when the autonomous bus first encounters 120 people less than four feet tall (a school trip) and slows and hands over to a remote human supervisor, this also becomes an opportunity to improve the AI syllabus based on the AV’s assessment of this unusual event’s “distance between pre-learning and actual experience,” AND an opportunity (if the AV’s interpretation of the novel situation turned out to be correct) to extend AV assurance/certification for that bus route.

Paul is scaling and accelerating this kind of teaching using adversarial learning, with AIs generating tens of thousands of ‘nightmare’ scenarios to train and test other AIs.

AIs teaching AIs: another accelerator!

Vehicle data: an obligation to share

We spent a lot of time on one of my favorite subjects, the future of vehicle-to-vehicle information sharing, and I was pleased to hear Paul agrees with the power of shared metadata for optimising vehicle systems, both in real time (think: Internet of vehicles) and for perpetuating AI-navigation learning across all vehicles, forever. This is one of the developments that makes the future of AVs so inevitable and so BIG.

Paul went further. He sees a social contract to share, and indeed believes that in future data sharing will be required by regulation, because that’s exactly what we expect in every other context. Again, look at air travel. If there is an incident, we EXPECT the information about the world interpretation and the decision process to be shared so we can improve safety. Why would the future be any different for systems populated by automated ground vehicles?

So a big future question to be tackled by the AV industry is: how much to share? What metadata to share across heterogeneous fleets? What learning is to be shared? What’s the *most relevant* information? That last one is vital because you can’t share everything. As Paul put it, “that could be an N-squared data sharing problem, if you're not careful.”
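To see why naive sharing doesn’t scale, compare the number of communication channels needed when every vehicle shares directly with every other versus via a common broker. This is a toy calculation of my own, not anything from Oxa:

```python
def pairwise_links(n: int) -> int:
    # Every vehicle sharing directly with every other vehicle:
    # n-choose-2 channels, which grows as O(N^2).
    return n * (n - 1) // 2

def hub_links(n: int) -> int:
    # Every vehicle sharing via one common broker or authority:
    # one channel per vehicle, which grows as O(N).
    return n

for n in (10, 1_000, 100_000):
    print(n, pairwise_links(n), hub_links(n))
# 10 vehicles:      45 pairwise links vs 10 via a hub
# 1,000 vehicles:   499,500 vs 1,000
# 100,000 vehicles: ~5 billion vs 100,000
```

At fleet scale the pairwise approach is hopeless, which is exactly why the questions of *what* to share and *through whom* matter so much.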

He and his colleagues are already making their data available to authorities, and he mentioned the Automated Vehicles Act in the UK as an important part of the future journey.

All of which ties back yet again to Paul’s AI-teaching focus: shared AV data is exactly the feedback-loop fodder that turbocharges the AI-teaching.

Trophic cascades

Did I mention Paul thinks big?

A huge part of his thinking is the way AVs will re-shape so much beyond transportation. “It's called a trophic cascade, where you introduce something somewhere else in an ecosystem and it totally transforms the ecosystem. And I tell you, autonomy is a trophic cascade.” Removing the person allows different vehicle shapes and different procedures, removes the need to design around driver safety, allows different energy requirements and different vehicle utilizations, and enables ‘variance control,’ where reduced uncertainty around where and when machines turn up and do their jobs produces big efficiency gains in operations.

I especially liked Paul’s neat example when they were automating ports and realised that one of the savings was turning the lights off at night!

Cities going ‘all in’ on AVs

I couldn’t help but throw in my candidate, Singapore, for first city to go ‘all in’ on AVs and use policy to radically improve efficiencies across the entire transport system and release big secondary benefits across city services, logistics, safety, pollution, liveability, residential planning, re-zoning and real-estate re-purposing.

Paul agreed with the end point, and he added the UAE and Saudi Arabia to my list, but again emphasized the progressive journey of driving many somewheres leading to driving everywhere:

“The 100th somewhere is faster because of the 99th. And it's faster because of the 98th, and each of those is satisfying actually what a customer wants in my airport, my bus route, my city, my delivery route. And they join up. And then you build the technology to enable that to get faster. So you build yourself a flywheel, which is better than going, ‘I will sit here darkly until it's all done.’”

Neither of us knows where or when it happens first, of course. It might be in China. Whenever and wherever, I can’t wait. And I meant what I said about the human feedback loop: when people travel and experience the extraordinary benefits, other cities will follow faster.

It's always about people

There was much more, and we had such a lot of fun making this podcast, but perhaps a fitting way to end is with Paul’s generous sentiment about Oxa’s success being 100% tied to the people around him.

Yes indeed, it’s always about people, isn’t it? Even in a robotics company.

 

ENJOY THESE FUTURIST INSIGHTS?

If you enjoyed this, consider subscribing to “FutureBites with Dr Bruce McCabe,” which I started as a way to share my interviews freely with wider audiences. 

And if I can help your organization explore pathways to a better future at an upcoming conference or event, please reach out. As a futurist speaker, I deliver keynotes that inspire leaders to think bigger about the opportunities in their future. It’s my way of making a difference.

 

INTERVIEW TRANSCRIPT

Please note, my transcripts are AI-generated and lightly edited for clarity and will contain minor errors. The true record of the interview is always the audio version.

BRUCE MCCABE: Welcome to Future Bites, where we're looking today at the future of autonomous vehicles with Paul Newman. Welcome, Paul.

PAUL NEWMAN: Hey, good to be here.

BRUCE MCCABE: [laughter] Well, it's a pleasure to be here over at “Oxfa” in Oxford, to talk to you.

PAUL NEWMAN: Oxa! Oxfa? I don't know who those guys are.

BRUCE MCCABE: Well you used to be Oxbotica [laughter]

PAUL NEWMAN: Yeah, we used to be. We founded it, what, a decade ago. And then a couple of years ago we went, ‘We need to change the name because it's too long.’

BRUCE MCCABE: You're making life hard for people like me because I can't keep up. It’s too much.

PAUL NEWMAN: Well, it’s still got the Oxford and the autonomy in it. We just took the ‘botica’ bit out from the middle. There was a moment where perhaps someone thought it sounded like a hair product. So, right now I think Oxa is quite punchy and we've gone with a purple colour, so yeah, it's Oxa Autonomy now.

BRUCE MCCABE: Beautiful. Those consultants are worth every penny, aren't they?

PAUL NEWMAN: You may say that; I can't possibly comment.

[laughter]

BRUCE MCCABE: So the reason we're talking: we met years ago, and you're a guru in robots. And I want to give people just a sense of all the things you've done, and why I'm excited to be here. If I look at your bio, right, what do we call you? You're the BP Professor of, let me get this right, Information Engineering at Oxford. You're also the founder of the Oxford Robotics Institute. A Fellow of the Royal Academy of Engineering and of the IEEE for your outstanding contributions to robots and robot navigation. In 2014 you founded Oxbotica, the spin-out, which has been doing amazing things, which we're going to talk about. I think you've even been given a bit of a tap on the shoulder from the King!

PAUL NEWMAN: Oh, that's very kind. Yes, I was mentioned in the honours. That's a nice thing, but, yeah, great to have you. I mean, I've had the most privileged life and it started in Australia. Well, I mean, I'd say my childhood was pretty privileged, where I come from, in the UK, in a beautiful part of the world. But then, moving out to Australia in 1996, there was a day, and it was 15th of March 1996.

BRUCE MCCABE: Oh, that's specific.

PAUL NEWMAN: Yeah, very specific. So I remember it really well. I crossed the road, crossed Parramatta Road, and was talking to someone. I was just starting my PhD. And I said, ‘What are you working on?’ This guy sort of explained the basic idea he was working on, and I sort of distilled that down to: wouldn’t it be great if machines knew where they were and what to do?

That's just such a beguiling problem, yeah? But everything I've done since then has been obsessed with how machines can know what's going on and what they should do. And those were halcyon days of robotics, and we thought we were just on the cusp of this. And I joined as the first PhD student at the Australian Centre for Field Robotics out there, under a guy called Hugh Durrant-Whyte, who I'd met here in Oxford actually …

BRUCE MCCABE: I know Hugh!

PAUL NEWMAN: Yeah, so he was my supervisor. It's great. And he said, look, you know, you could stay in Oxford to do your PhD, go to Cambridge – no, that'd be a terrible idea – but then, you know, I thought, I'll go to Australia. And I started out there and I got to build a submarine, and worked on some fundamental problems of robotics, and also on building submarines, because I'm a systems builder. And looking back, it was around about then, you know, that we were starting to get big machines to move in ports, and mining machines, and I think that's when I got my utter obsession with systems.

BRUCE MCCABE: Wow.

PAUL NEWMAN: Like, build it, press go, and does it work? Right? At the same time I was becoming obsessed with academe as well, because there's something unnaturally productive about that environment that you just need to go, ‘all hail.’ There's something about, ‘I think I can. It doesn't work yet, but it will,’ and you get to work with really bright people. I think, as you grow to be a professor, you're kind of like a CEO in a funny way, because you've got to do the basic product development, that's your research, right. You've got to do the fundraising, because it's got to be supported. You've got to do recruitment.

BRUCE MCCABE: There's no hiding.

PAUL NEWMAN: You've got to do it all, right? And I think it's no surprise that some of the big contributions to robotics have come from people who came from academia originally. And I look at that trajectory -- doing ocean engineering, submarines at the bottom of the ocean, doing some work on space robotics -- and all the time building systems that do something useful that we need as a species. That's really cool! And it never gets old, ever, when you build some software with a great team and that team did it! And in some way you or, even better, someone else, presses go and off it goes. And what a blessed time to be doing autonomy with the explosion that we've had in AI and what that means.

BRUCE MCCABE: Yeah, these last 10 years have been phenomenal.

PAUL NEWMAN: Phenomenal, right? Happy days. And there's a whole load of interesting questions in there about what that means for robotics and for autonomy and what that means for trust. But that's just another multiplicative factor of another branching that's happened, and we've got a whole new set of questions on how we do that. But you know, it's just like being given electricity! Do you know what I mean? It's like you were all mechanical before and there were steam engines, and then suddenly there's electric. And you can do different things with that.

So I'm going to wreck your interview here, because check out this chair …

BRUCE MCCABE: Oh it's squeaky, that's all right. Yeah, so I always say the luckiest people on the planet are the ones that work out what they want to do early. So that’s you. You really got that moment where you got clarity and you got focus and you said that is the thing I want to do.

PAUL NEWMAN: Yes, and I was lucky in that sense. And you know I'm a fan and I burn with a nuclear fusion fire for stuff. I guess I was fortunate in that, I remember talking to my now wife about what I did. The most amazing thing: I want robots to never get lost! And ‘lost’ being a proxy for always know what they're doing. She was like: ‘Cool. Sounds niche.’ And she was right because it was.

Now, look. It's everywhere!

I mean, for how long do we really expect to have to have one operator in every vehicle located in the vehicle? So that's some sort of heritage from horses pulling traps. Ok, it's something like 3,000 years old. And if you remove that and you look at a world where actually you could have one operator for “n” vehicles -- 17 is a number I often choose for that, not because it means anything, but just feels like that's extraordinary, right? And those operators can be anywhere. That's extraordinary!

Can I tell you a story about wolves?

BRUCE MCCABE: Yeah, I'm slightly worried we're going way off track. At some stage we're going to get to vehicles! [laughter]. But go on, tell us about wolves.

PAUL NEWMAN: Right, because this is an analogy and you're going to go, okay, this guy's a bit wild, but I think this works really well. And it's going to come back to why autonomy changes vehicles and why changing autonomy and vehicles changes ecosystems of how we do stuff. So a while back we reintroduced wolves to Yellowstone. Great, noble animals. Awesome. And wolves like to eat other four-legged things that run around, and those things don't like being eaten, right, so they kind of run away and they keep moving. But also the unhealthy animals get picked off first, because wolves tend not to go for the buff elk, they tend to go for the weak ones. Which meant there wasn't so much overgrazing. And then plants start to grow back. Saplings grow. Saplings grow on riverbanks, which means when the rains come, not so much soil washes into the river, so the rivers become healthier. The fish move back, the beavers follow, then the beavers dam the rivers and the rivers change speed and mountains change shape …

It's called a trophic cascade, where you introduce something somewhere else in an ecosystem and it totally transforms the ecosystem. And I tell you, autonomy is a trophic cascade.

When you remove some of these dependencies that we have right now for a human with all of their skills, one per vehicle, situated in the vehicle as it's doing it … when you remove that, we'll have a whole suite of invention by other businesses saying, oh, I can do this differently now. Yeah, that's it. So you'll have different vehicle shapes because, say, I'm, oh, I don't know, transporting suitcases around an airport and I really don't need to care about the safety of the human in the front of the vehicle, because there isn't one. I'll make that vehicle completely differently. I'll have completely different systems. I'll have different energy requirements for it.

I might not build a generalist, and I might say, actually, I can have different vehicle utilizations because I don't have to optimize to always be using the driver, because the driver can be switching their presence to one of 15 different vehicle types. As an example, we'll mine differently because we won't have a dependence on a human, and the vehicles will be different sizes. We'll get different ore out from different densities in different mines. And I love this. And just remarkable second-order things happen as well. I mean, I remember noting, when we were doing the port autonomy, that one of the big savings was that they turned the lights off at night.

BRUCE MCCABE: Oh, brilliant. Isn't that cute?

PAUL NEWMAN: Yeah, that's really cute. Because robots don't see in the same spectrum.

BRUCE MCCABE: Yeah, I love that stuff. They don't need the lights, they don't need the overheads.

PAUL NEWMAN: Yeah, they shine special radar lights.

BRUCE MCCABE: I get excited when we start talking systems like this. Yeah, because it's an internet, little internets of vehicles, if you like. And it's like how do you optimize the packets and the movements of the packets? It's not one vehicle, you're optimizing a system of vehicles.

PAUL NEWMAN: Right! And they’ll talk to you in a different way. And their energy efficiency could be quite different. Their utilization could be quite different, because they're being utilized differently. Their maintenance could be quite different. They could tell you more about their maintenance. They might be able to change roles.

We often think about the economics of this as a system. Often people come at it with a reason, but it's not the main reason. I don't think there'll necessarily be fewer roles in operating vehicles. I don't think there's going to be fewer jobs. It's like saying, well, you know this whole computing thing, there's going to be way less jobs.

BRUCE MCCABE: Well, there never is. There never is. In human history it's never happened.

PAUL NEWMAN: Exactly! It's never happened. When a new technology comes along, we never go, I'll have fewer of it.

BRUCE MCCABE: We do more work.

PAUL NEWMAN: Yeah, we'll find different ways to do it, and so it's important to look at these sort of second order effects and actually where autonomy and self-driving vehicles are going to help, and one of them is often just in variance reduction.

So, if I could be very highly predictive about when my suitcases are going to arrive, or when my truck's going to be underneath a shovel, or when I'm going to need maintenance, or making sure there's enough space between me so I'm not going to have to pause at a roundabout as luggage comes through, whatever these things are …

BRUCE MCCABE: the outages

PAUL NEWMAN: All of those things, they're inhuman in the way they can do it, because they're not human. It's really interesting, yeah. So if you think about the sort of inefficiencies and difficulties we have in the logistics of coordinating, oh I don't know, 800 humans on an airport (are they all coordinated?) versus highly, highly obedient hedgehogs (I often use that because they do exactly what you say), with the variance control you can get uncertainty reduction. That's also quite interesting from an operational perspective. And that's before you get to any of the safety stuff.

I mean, we're really dumb at simple tasks. The number of times people ding people and if you can remove that and have machines … because the one thing a machine is never going to do is just be distracted. It may have subhuman competencies, right, and may not be able to solve those corner cases like humans go: ‘oh, I can see what I can do here.’ We won't be doing that bit for a long time, but I tell you what they're never going to miss a person because someone was screaming, or they had a big one the night before, or they're worried about something. All of those things that make us human don't interfere or apply, in ‘I'm moving from A to B.’

BRUCE MCCABE: You've already done some really interesting deployments. Can we just run through some of those? Because I remember the one that impressed me early on was you were roboticizing airside vehicles at airports.

PAUL NEWMAN: Yeah, so back in … god, that was ages ago, hey. I think that was in 2017. We were airside at Heathrow in a small vehicle that we had built at that time. So at that point we were still, you know, retrofitting some vehicles ourselves. I mean, I'm kind of obsessed with this idea of universal autonomy. Which is why we did airports. And it's on a big sign behind me, but this is audio.

BRUCE MCCABE: Big part of your branding now. Universal autonomy. You have to explain it.

PAUL NEWMAN: Yeah, so look, the idea is, if a human can drive a car, they're pretty good at probably driving a shuttle or a baggage handler. The skill seems transferable.

BRUCE MCCABE: Absolutely.

PAUL NEWMAN: Because what do you have to do? You have to control your speed and you have to control your steering, and then you've got to do that whilst predicting what's going on in the world. Okay, so what's different? Well, the hardware is changing. So my belief is that there's a common set of skills that you want a driver to have, and it's the software that enables this transformation. It's not the hardware. And the analogy I like to use is, you might be a Mac user, but you probably know people who've been Windows users for a while, right? But I suspect over your life you've had many different brands of computer underneath. You might have an IBM or a Compaq or a Lenovo or whatever they are, and you were probably doing different stuff during your life as well. So you were writing different files and you were doing different work. But that substrate of the operating system that provided files and things that you could type and printer services, and that coordinated your computer hardware, that kind of stayed constant with you. There was a sort of common interface. So build that for autonomy, so you could put it on the vehicles that we haven't got yet and the vehicles we currently have, because there are brands of computers that haven't been invented yet.

BRUCE MCCABE: So right through the interface, not just the intelligence, yeah.

PAUL NEWMAN: Yeah. And I want to build something like AWS, in that other businesses then get to innovate and use that platform to do things with it. So that means I don't have to become -- I'm very interested in this -- a ports expert. You know what? There are people who've been doing ports and they're nailing it. They know how to do that.

But if you enable them with a technology that says, hey, here is autonomy as a platform, so you can coordinate vehicles, and you don't have to buy a particular brand of vehicle to do that, and you're not going to buy a vertical and end up with 60 different autonomy systems you have to coordinate, there's something that's different. You can say, actually, I'll be a co-founder with you in this industry and we will build fleets of vehicles that can be autonomized, and you bring the knowledge of being a miner, a port operator, an airport operator or a bus operator. You bring that, do the thing that you do, and then you generate value with that. And that's exactly what's so compelling about AWS: it will handle what you want to use cloud computing for, but here's a bunch of APIs that allow you to do cloudy things and let you scale. So build that, and you're abstracted from the hardware a little bit as well.

BRUCE MCCABE: So that's the business model now. Was there a transition point?

PAUL NEWMAN: No, it was always like that.

I feel quite strongly that single-occupancy personal vehicles, like the sedan you buy, are the last place this technology is going to arrive in its fullest form.

BRUCE MCCABE: Yeah, absolutely. The coolest things are far more industrialized and interesting.

PAUL NEWMAN: Yeah. So the competencies to drive my car are the same as driving a forklift, so you need to stay anchored. This is really important from the technology perspective. You need to stay anchored on that north star. On the way to doing that, you need to make sure that you haven't overfitted to a particular type of domain, like ‘I can only do roads on the right or left-hand side,’ when someone says, can you do forests as well? Or, can you do a port? And then, as you're building that, you need to build something that's explainable as well, since in this new world our systems have to be explainable.

So in a minute maybe we can talk about the sort of fundamental technical vision and strategy which is called being ACE -- available, configurable and explainable -- about how we think we do that and how that fits into where we deploy and with who we deploy autonomy first, whilst not getting caught up on trying to have to do driving everywhere across the whole planet before you can drive anywhere. Our model is, ‘drive lots of somewheres and in doing so you end up driving everywhere,’ but you better make sure that you can do those somewheres ever quicker and be assured about doing those somewheres so you can get insurance and assurance. Because it seems wild to say yeah, yeah, yeah, you should trust me because I've built the everywhere driving machine.

BRUCE MCCABE: Oh, absolutely. I'm going to immortalize that line. ‘We need to drive lots of somewheres to learn how to drive the everywhere.’

PAUL NEWMAN: Yeah, and you get driving everywhere by going through the somewheres. Yeah, and there is technology that you need to do that. It's not like a chicken strategy. There is definite technology that you need to be able to rapidly produce a syllabus that allows you to work in those somewheres, because until we solve the everywhere problem -- maybe one day we will -- there are likely to be gaps between what you need to do in a particular installation. Because no customers come and say ‘I want to drive the world,’ ever. They come and go, ‘I would like to do this, Paul.’

BRUCE MCCABE: Okay, before we get into some of the technical pathways, let's put some more color into this, because it'll help people contextualize what you're saying. Let's just go back to say those airports. You've done a variety of campuses that I've seen lately in partnership with Beep, which are little mini buses. So beautiful.

PAUL NEWMAN: Yeah, they're little shuttles, yeah, little shuttles driving around in Florida.

BRUCE MCCABE: Florida State Campus.

PAUL NEWMAN: Yes, yes. Lake Nona. Lake Nona, that's a small sort of city in Florida near Orlando. And we're over in California. And then we've done quite a few sort of efforts here in the UK through various cities.

BRUCE MCCABE: Again, shuttles within zones? Is that the idea?

PAUL NEWMAN: Yes, yes. We've done some mining work, so we did a few years in mining. That was good fun and that was attractive. We've done some work with Ocado, which is a grocery delivery company -- last-mile grocery delivery.

BRUCE MCCABE: Okay, where was that?

PAUL NEWMAN: That was three or four years ago. That was in London, in East London, down in Greenwich.

BRUCE MCCABE: Interesting context. So within the Greenwich precinct, if you like. Doing deliveries.

PAUL NEWMAN: Yes. Here's the thing that sort of defines all of those things. They're repeatable routes, yeah, yeah, and that makes a lot of sense. That's another form of saying you do somewhere before you do everywhere.

There was a time where I thought I was an interesting guy and now I think maybe I'm not, because … I've got really interested in buses. And that feels terminal, right?

[laughter]

BRUCE MCCABE: They're a lot more part of our future than some of the sedan-based models.

PAUL NEWMAN: Do you know how many bus trips are taken in the UK pretty much every year? It's above 2 billion. Excuse me? So wait, there are 2 billion instances where a human swapped money for transport on a known route. Mm. Right, and no city has ever asked us, ‘can I have more single-occupancy vehicles please?’ They're all asking us for bus routes. And there's lots of good things there.

BRUCE MCCABE: So many good things.

PAUL NEWMAN: When you think about it: how am I going to come up with an assurance case for that? How am I going to say I'm very, very sure this is going to be safe and it's going to operate here? Because I can rehearse and I can say strong things statistically, rather than going, ‘you should trust me, I can go into all the places and it's all going to be fine, most likely.’ There are very strong things you can say about that and, again, it's building that technology that allows you to do that rapid installation. So, yeah, I guess our technical strategy is anchored around being ACE: available, configurable, explainable. One of those principles, being available, is to go early with customers and learn by building systems. Because, as a systems builder, every single time you field it, the universe will tell you you're an idiot within 30 minutes and you've forgotten something, and it's just the best teacher. So you get out there with customers, you understand what it means to be working with them, and then you're humble about it and you make sure you build a business that can iterate quickly.

And that's always a bit of a battle. Businesses don't naturally stay in a place where they're always up for quick iteration. You've got to keep pushing on that. But every time that we've done a deployment, we find something, and we're always deploying, we're always deploying. And then sometimes you go to a place and you go, oh yeah -- there was one in Germany where we had this lavender bush of doom. I remember this one, my word. It was late, late summer and there was this huge, glorious lavender bush on the side of the road, and every time a truck went through in front of the vehicle, it caused a vortex behind it and this lavender bush just shot out into the road, and for all the money in the world it looked like a thing … because obviously, yeah, you don't build a self-driving system that says, ‘well, I understand that there are things called lavender bushes.’ No, we have to have a ‘thing’ class.

BRUCE MCCABE: I see. That's the catch-all for everything.

PAUL NEWMAN: Yeah, we have to. Otherwise, I mean, you could naively go, well, I need to spot these things, and then you would enumerate the kinds of things. Well, what happens if you see a donkey and that's not on your list? You need to have an ‘I don't know what it is, but it's just a thing and I shouldn't hit it.’ And the lavender bush certainly hit the ‘thing.’ Of course, the lavender bush was seen by the laser and the camera, but it wobbles around. So there's an interesting thing there about how you know ... A human goes, oh, it's OK, it's a lavender bush, I know it's permeable. So then is there a secondary category of permeable? A thing that you physically see and can sense with different sensors, but it's okay to drive through? I hope you get that one right.

BRUCE MCCABE: Tough problem.

PAUL NEWMAN: Tough problem. But then you're definitely going to need that if you're going into agriculture, because that's the whole point of harvesting -- you just drive through stuff. Yeah, right, it's really interesting when you start decomposing these into products.

BRUCE MCCABE: And presumably you came up with a lavender bush type solution.

PAUL NEWMAN: Yeah, well, one of the things you could do is remember that at this place, there is a lavender bush of doom. That's a very simple, very pragmatic way to do it. Instead of solving ‘I will understand different kinds of vegetation,’ you could say, well actually, this is a repeatable area, I'll generate a syllabus, I'll generate some training that says ‘in this area, you have a heightened risk of seeing things like this, and a human has said yeah, I understand what's going on there.’
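Paul's two ideas here -- a catch-all "thing" class for anything the classifier can't confidently name, plus a human-reviewed, site-specific exception for known clutter like the lavender bush -- can be sketched in a few lines. Everything below (class names, the confidence threshold, the map of permeable zones) is invented for illustration; it is not Oxa's actual implementation:

```python
# Toy open-set perception fallback: any detection that doesn't match a known
# class with high confidence is kept as a generic "thing" the planner must
# avoid, while a per-location prior (the remembered "lavender bush of doom")
# lets a human mark specific recurring clutter as permeable.
from dataclasses import dataclass

KNOWN_CLASSES = {"car", "pedestrian", "cyclist", "traffic_cone"}

# Hypothetical map of places where a human has reviewed recurring clutter
# and marked it safe to treat as non-solid.
PERMEABLE_ZONES = {(120, 45): "lavender bush"}

@dataclass
class Detection:
    label: str          # classifier's best guess
    confidence: float   # confidence in that guess, 0..1
    location: tuple     # (x, y) in map coordinates

def categorize(det: Detection, min_conf: float = 0.8) -> str:
    """Return 'permeable', 'known', or the catch-all 'thing'."""
    if det.location in PERMEABLE_ZONES:
        return "permeable"            # human-verified site-specific exception
    if det.label in KNOWN_CLASSES and det.confidence >= min_conf:
        return "known"
    return "thing"                    # unknown object: never hit it

print(categorize(Detection("donkey", 0.3, (10, 10))))   # -> thing
print(categorize(Detection("bush", 0.5, (120, 45))))    # -> permeable
```

The point of the default branch is exactly the donkey case: the system never needs to have heard of a class in order to refuse to drive into it.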

BRUCE MCCABE: That reminds me, I once had a conversation with William “Red” Whittaker at Carnegie Mellon. We were talking about the DARPA challenge and all the things he did there. But on the future of vehicles, he came up with a little one-liner: if one vehicle slips on some black ice on one corner in Colorado, at one time of year, at one time of day, then every other vehicle should know that it might slip on black ice there, at that time of day.

PAUL NEWMAN: Yeah, and look, this is a huge point here, right? So, look, I failed my driving test five times …

[laughter]

BRUCE MCCABE: Oh, that's good to know!

PAUL NEWMAN: … Yeah, I mean in the third one. In the third one …

BRUCE MCCABE: Sorry, on the CV of Oxa, we'll just let everyone know the person developing all of those robot navigation systems … CAN'T DRIVE!

PAUL NEWMAN: It's often called the engineers' deficiency theory, that they kind of work on solving the thing that they're innately bad at themselves. I'm also absolutely shocking at map reading.

BRUCE MCCABE: Oh, that's terrific as well! [laughter]

PAUL NEWMAN: So on my third test, just to finish this off. On my third test, I may have reversed into the wall of the test center, suffering a forwards-backwards ambiguity problem, so I may have smacked the vehicle into the test center wall.

BRUCE MCCABE: If only we had video!

PAUL NEWMAN: But here's the thing on that. As humans, we start to drive with only our own experiences. So at 18, I only had what was inside my cranium, and my exuberance. Okay, that's insane, right? But now think about what happens as we move forward. Every vehicle benefits from the experience of every other vehicle, ever. That is ..

BRUCE MCCABE: It's profound.

PAUL NEWMAN: That is profound. That's right. And it's more than data sharing. So you know, one example -- and Red's example there is a good one -- it would be reasonable to expect that every vehicle has been told there can be black ice here. You can imagine someone going, ‘why did you not share that data?’ Right?

And so that's interesting, because that starts to poke at something about explainability, about what humans would expect from a social contract to be true, about how vehicles are reasoning about their world. And I think about this a lot, because there's a technical perspective that might go, well, actually it should just learn about black ice. But there might be an expectation from society that says, hang on, there have been two accidents there. Why did not all vehicles know that this corner was sketchy? And I would agree.

BRUCE MCCABE: And it extends into -- let me know what you think -- but there's a world of vehicle-to-vehicle communication that we've been dabbling with in different ways. Do you see a future where we have standardized the metadata that's being exchanged between vehicles, all vehicles?

PAUL NEWMAN: Yes. I think more strongly than that. Okay, I think that it will be more than a social contract. I expect there to be regulatory requirements for things. The term used for it is in-use monitoring, such that the vehicles will transmit and make available to regulators their perception. We can argue about what perception means, but of… there's a lot of smoke coming from right there.

BRUCE MCCABE: I hope that's not one of your vehicles.

PAUL NEWMAN: It's very close to where I parked my car.

BRUCE MCCABE: Drama in real time!

PAUL NEWMAN: Anyway, I'm just going to move on, and that will be future Paul's problem [laughs]. Anyway, vehicle to vehicle. In-use monitoring. So, there's a new law in the UK that's coming through. It's called the Automated Vehicles Act, and it anticipates a very smart regulator that's in the loop. So we call our technology Yellow Bird, because it's like a cross between Twitter, tweeting a stream of what's going on, and a canary in a mine -- a safety thing. And it's transmitting a representation of what the vehicle thinks is going on, and that is the context on which it makes its decision.

Now, coming up with those representations is a ton of AI on both sides. There's a ton of AI that says, well, this is what I thought the world looked like. And there's another bunch of AI that says, well, given that, this is what I'm going to do. And that's interesting, because that's like what we would call an inductive bias in there that says, here's an API where there is a representation. Now, that's not the only representation the vehicle uses. It can have very complicated manifolds coming through, very complicated representations of state that are not humanly translatable. They might be there as well, but here is an explanation of what it thought the world was. That could be wrong, but it's still the explanation for why a decision was made at that point. Given that, this next part of the system then made this decision, based on a whole bunch of data about how humans tend to drive in those situations. So we're making that available, and it will be available to authorities who want to know what's going on in the vehicles as well.
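The architecture Paul describes -- perception emits an auditable representation, the planner conditions on it, and the (representation, decision) pair is logged for regulators -- can be sketched roughly as follows. All function names and fields here are invented for illustration; this is not the Yellow Bird implementation:

```python
# Toy sketch of in-use monitoring: log the human-interpretable world
# representation the planner actually conditioned on, alongside the decision
# it made, so the pair can later be replayed or handed to a regulator.
import json
import time

def perceive(sensor_frame):
    # Stand-in for the perception stack: returns an auditable summary,
    # not the full (possibly non-interpretable) internal state.
    return {"traffic_light": "red",
            "objects": [{"type": "thing", "dist_m": 12.0}]}

def decide(world):
    # Stand-in for the planner: conditions only on the auditable summary,
    # so the logged pair really is the context for the decision.
    return "stop" if world["traffic_light"] == "red" else "proceed"

def drive_step(sensor_frame, log):
    world = perceive(sensor_frame)
    action = decide(world)
    # "Given this world, I chose this" -- the explanation, even if the
    # perception itself later turns out to have been wrong.
    log.append({"t": time.time(), "world": world, "action": action})
    return action

log = []
print(drive_step(None, log))           # -> stop
print(json.dumps(log[-1]["world"]))    # the representation a regulator sees
```

The key design point is that the decision function sees only the logged representation, which is what makes the log an honest explanation rather than a reconstruction after the fact.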

BRUCE MCCABE: So when you say explainability is a key part of your strategic direction, is this what you're talking about?

PAUL NEWMAN: It's part of it.

BRUCE MCCABE: Because in my mind it was also that the system would be able to explain to the user its reasoning.

PAUL NEWMAN: It's the same thing.

BRUCE MCCABE: Is it?

PAUL NEWMAN: Yeah, well, it's a different rendering of it. So you can render it as where the things were. But we can also use an LLM to say, ‘and because of that I turned left,’ though that LLM could be wrong. An LLM is a simulation of what a human would say about that. Maybe it's not necessarily understood in exactly the way that we understand things, and maybe that doesn't matter, but it is a string simulator, and so, yeah, you can come up with verbal representations of what's going on. But I think it's also a way you can deconstruct ‘this is what we thought the state is.’ So I think it's reasonable to say you would expect, say you were doing on-road, that there is a state of a thing called ‘a traffic light, and it matters.’

BRUCE MCCABE: A traffic light and it matters.

PAUL NEWMAN: And it matters what that state is. You'd expect: I did not see the red light. I did see the red light. That seems like a reasonable thing from the explanation. Now there are those that say, oh no, we shouldn't worry about that, that all gets learned. Yes, you could learn it, of course you could. But it seems reasonable to have to explain your decisions based on the things that other road users would use to explain the world, if these systems are going to be working alongside humans, and that's part of the explainability. I think, also, from an explainability perspective, the fixes that you make must be expressed in the same language as the explanation of the fault. I think that's really interesting as well.

BRUCE MCCABE: It's interesting and it's hard, isn't it? This is a hard research pathway, but a really good one.

PAUL NEWMAN: This is a generational problem.

BRUCE MCCABE: Yeah, but it's a really important one because that's a rising tide that lifts all boats, if we have good feedback loops in general.

PAUL NEWMAN: I like the way you said that. It's feedback loops.

BRUCE MCCABE: Yeah, for both users and systems. Then everything accelerates, and comfort accelerates as well.

PAUL NEWMAN: Yes, and we need to sort of also separate the comfort from the safety as well.

We can easily get those two things confused right?

BRUCE MCCABE: People get very comfortable very quickly [laughs].

PAUL NEWMAN: Yes, yes. Performance from a comfort perspective is different from ‘do I think that system is safe,’ and we think very strongly about two paths in there. There's a thing called the safety path and there's a thing called the performance path. The performance path does all your fancy thinking. It's the guy that understands very beautiful accelerations and lane changes and makes long-distance plans. The safety path is the guy that's going, ‘I don't know how you do your highfalutin thinking, but I'm sitting here, and if we get into a moment where I can't get you out of trouble, I'm grabbing the wheel.’ Have you ever flown a plane? Have you?

BRUCE MCCABE: So it's like a meta layer.

PAUL NEWMAN: Yeah, have you ever flown a plane?

BRUCE MCCABE: I have flown a plane.

PAUL NEWMAN: And you know, when you were being trained, there was a rule with the guy training you: if he says ‘hands off,’ you just have to let go of everything and he'll make you not die.

BRUCE MCCABE: Yeah, that's true, there was that. Yes, I wasn't formally trained, I was sitting beside someone and I took the controls. But it was a similar process.

PAUL NEWMAN: It was a similar thing, right? I mean, I remember when I was learning to fly a glider, the person behind said, this will be fine, you know, figure it out. But if I say ‘hands off,’ just let go of everything and curl your feet up, because I'll make you not die. And there were a couple of times where I was really glad that instruction had come through …

… Oh, there goes -- look, there goes another vehicle, there's a van. That's the E-Transit, it's just going past there.

BRUCE MCCABE: Oh, there you go. We've got robot vehicles circling us, ladies and gentlemen.

PAUL NEWMAN: Yeah, we like that one actually. It's quite nice as a vehicle for doing work in airports.

BRUCE MCCABE: This is an E-Transit van, fully automated, is it? Perfect. And we're in the UK -- the Transit van is the ubiquitous utility vehicle.

PAUL NEWMAN: What's not to love, hey?

PAUL NEWMAN: Right, where were we?

BRUCE MCCABE: We got onto airplanes very briefly.

PAUL NEWMAN: We were talking about the safety path.

BRUCE MCCABE: The meta layer of safety oversight.

PAUL NEWMAN: Yeah, and it's trying to stay out of the way and not intervene. And it's got a single goal: don't hit stuff. And have a very, very low false negative rate. So what it must not do is miss something that it should have intervened for. And there's all the statistics around how you would build that. The way it's constructed, it's not going to make comfortable, amazing plans for you on how to drive. If it takes over -- remember, you're flying, and your instructor takes the controls -- it may not be a great experience those next 10 seconds as they do a course correction and make sure the bad thing doesn't happen, but it's because the fancy stuff hasn't worked or has had a fault. And that's the guy that's safety rated, which means you can put more sophisticated stuff in the performance path and iterate on that faster.
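The two-path split Paul describes can be sketched as a simple arbiter: a sophisticated performance planner proposes a plan, and an independent, deliberately simple safety path with one job -- don't hit stuff -- vetoes it and substitutes a blunt fallback when a conservative clearance check fails. The functions, the braking model, and the numbers below are all invented for illustration:

```python
# Toy performance-path / safety-path arbiter. The safety check is kept
# deliberately simple and conservative, which is what makes it feasible to
# safety-rate it while the performance path iterates quickly.

def performance_path(state):
    # Fancy planner: comfort, lane changes, long-horizon plans (stubbed).
    return {"speed": state["speed"], "label": "comfortable plan"}

def safety_path_ok(state, plan):
    # Assumed constant-deceleration braking model at 4 m/s^2: the stopping
    # distance at the planned speed must fit inside the observed gap.
    stopping_dist = plan["speed"] ** 2 / (2 * 4.0)
    return stopping_dist < state["gap_to_obstacle_m"]

def arbiter(state):
    plan = performance_path(state)
    if safety_path_ok(state, plan):
        return plan
    # Fallback is uncomfortable but safe: hard braking, like the flying
    # instructor grabbing the controls for the next 10 seconds.
    return {"speed": 0.0, "label": "minimum-risk stop"}

print(arbiter({"speed": 15.0, "gap_to_obstacle_m": 50.0})["label"])  # comfortable plan
print(arbiter({"speed": 15.0, "gap_to_obstacle_m": 10.0})["label"])  # minimum-risk stop
```

Note the asymmetry: the safety path never tries to drive well, it only decides whether the performance path's output is survivable, which is what keeps its false negative rate analyzable.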

BRUCE MCCABE: Really interesting, and it really ties into what's going on broadly with AI, where we're looking at modularizing it and having a higher-order supervisory capability and lower-order functions.

PAUL NEWMAN: Yeah, and we've got limited responsibility and redundant responsibility, yeah, nice. I mean, I don't buy that there should just be one path through. You might have several views and you might want an arbiter. You're making a plan for your best case, your happy path, your fallback, your emergency path. And then there's another guy that's darkly going, I'm just watching, I'm not going to say anything, but I will pass judgment if I can't get you out of this pickle.

BRUCE MCCABE: Interesting. Now I'm conscious of your time, yeah.

PAUL NEWMAN: And have we answered any of your questions? [laughter]

BRUCE MCCABE: Well, I want to talk now about the future. So have we got a little bit of time? Yeah, we're still good? Let's look forward with the opportunity hat on. Where are the opportunities that excite you in the next 10 years? In particular, if you had to prioritize some of this -- we've got a universe ahead -- but in the next 10 years, what excites you in the near term?

PAUL NEWMAN: So I don't think 10 years is the near term, that's just the term.

BRUCE MCCABE: Okay, you pick.

PAUL NEWMAN: Every time I go traveling I see so many. And we're going to contextualize it in the AV world, right, not other technology?

BRUCE MCCABE: AV particularly, if you want to generalize a bit more ...

PAUL NEWMAN: I'll do AVs, and I can do robotics more broadly. I think that we will see great progress from the likes of Waymo in building taxi services in some cities, and that's great, and they're opening that up. I think that's, if you like, the Model T Ford. It's the first version, and then a whole load of other stuff appears around it, and that's great, and they're pioneering in that sense. Huge admirer.

But what I get really excited about is heterogeneous fleets of vehicles, right, doing work collectively, autonomously, with humans in the loop in different ways -- like the one operator for 17 vehicles -- and you can imagine airports learning to operate differently. I'm super interested in the way in which the explosion in gen AI -- we'll use that term broadly, knowing we're abusing what we mean by it, but broadly the latest stuff …

BRUCE MCCABE: More generalized AI. More repurposable.

PAUL NEWMAN: Generative, not the general.

BRUCE MCCABE: Oh, okay.

PAUL NEWMAN: What I think is the real marked change there is, it's not the technology that's necessary in the vehicle, which is where all the music is now. It's how you teach. It's not so much about the students, it's about the teachers. And so what I'm very, very excited about is that, you know, great universities have great syllabuses and they've learned to teach really, really well, and I'm really interested in the impact you get from who teaches the teachers to teach the students. That is the real meta loop there, because we're seeing astounding results where we might take an AI model from three, four years ago -- a vintage fellow -- and give it the correct syllabus that's needed. And we're getting 400-500% improvements on old models simply because you're teaching them properly. And there's a couple of things you can do there. You can go, well, let's see how the student's doing and give them more homework -- there's a good analogy here with students and professors and teachers -- so give them more homework and let's hope they pass their finals better. But you can also analyze the syllabus and go, well, they were never going to be really good, right? Because you never told them about Maxwell's equations. So you can see the holes in the syllabus and go find a new syllabus. And if you can't find the syllabus, you can synthesize one with gen AI.

PAUL NEWMAN: So we're building AIs that teach the students, okay, and then we're building AIs that teach the teachers how to teach the students. And all of that is in another feedback loop that does that adversarially. And this is the ‘how do you drive everywhere? By driving somewhere.’ Because what we want to do is say, okay, we've got a port, we've got an airport, we've got a bus route through London, and we want to be able to operate on that. So we want to be able to quickly generate syllabuses for that bus route. We've got to teach the teachers to teach the students, and then we can assess the student's ability on that route and say something strong, very strong, about why we should trust that bus on that route, and that allows it to be insured. Because I'm skipping the ‘does it work everywhere?’ question -- because when you say yes to that, I'm going to take it God knows where, and that sounds like not a now problem.

So the main point is, why am I so excited? It's that, yes, the technology that's in the vehicles carries on progressing at an astounding rate -- different ways that you can plan, different ways you can perceive, how you might think about different topologies for learning. But I think the music is around the assurance of these systems from a trust perspective, and that is not a piece of paperwork. It's not ‘I have followed an engineering process.’ Well, congratulations, of course you should have, but that's not going to cut it. There's something different here about how we're going to build technology that gives us confidence in dealing with an open-world problem. If you look at old-school automotive -- when I turn the steering wheel, do the wheels turn? -- you'd use a V model. You'd use ISO 26262 as a standard for doing this. It's not going to cut it. Some of those things we need to inherit, but there needs to be more than that. So I'm really interested in the technology that's now available and being built off the vehicle, to enable the on-vehicle system to be what we want it to be. That's something that's really picked up over the last few years and is certainly where most of my headspace is now being spent, because it's about the quality of the teaching that enables the students to pass their exams.

BRUCE MCCABE: And it's about universality and repeatability, right? So any investment in that direction is something that can be applied to all machines that are autonomous that might navigate that context.

PAUL NEWMAN: It's a bit like, I guess, coming back to teachers. Teachers that have been teaching for many, many years can probably teach a lot of the syllabus.

BRUCE MCCABE: And what about big deployments? You know, in my head if I was thinking of a city on this planet that might go, ‘you know what? There's enormous efficiency gains, money to be saved, pollution to be reduced at a systemic level,’ I'd be thinking Singapore or something like this. You know there are governments out there that could go all in.

PAUL NEWMAN: Yes, they are. And particularly interesting to us, of course, are countries like the UAE and Saudi Arabia. They're like, well, we've got a hell of a transport problem. I mean, I heard an amazing statistic that I think Riyadh's traffic is increasing at 30% a year. That's extraordinary.

BRUCE MCCABE: That's ridiculous.

PAUL NEWMAN: And it's not like there's none now.

BRUCE MCCABE: Are you having conversations with those sorts of people, at that level, that government sort of level?

PAUL NEWMAN: Yeah, we're very interested in that area, absolutely.

BRUCE MCCABE: It's so profound because once you start to solve transportation and interrupt that escalation in traffic, that just doesn't stop. You start to rethink the geography of cities, you start to rethink what you do with space and parking spaces and it's a real estate play.

PAUL NEWMAN: Conversations in the UAE -- they've got a really strong statement about how much transportation they want to build, and it's across the board. It's in the airports, it's in the ports, it's in bus services. So, super interesting in that sense. And here's the thing. Obviously, you know, I adore the technology, but I also adore the business side of it, and I want to enable other businesses to grow. We want to enable other businesses with this technology to go invent and do something with it, and that's a really exciting scaling opportunity. So if anyone's listening to this and thinking, actually, I'm a fleet owner-operator, this is definitely coming, and I feel like being a co-founder in this field -- getting the knowledge of how to do this, being an inventor, owning some of the IP that enables businesses to exploit this technology -- shout! Because this is the founding vision of it. It's different from ‘we'll build a vertical and you will buy car type foo,’ or ‘vehicle type foo,’ or ‘have this operating system in it.’ Ours is an entirely different model. Our business model is: call us, and if you want autonomy, the answer is yes -- in what form? Let's go do it. It's just a very different business model, and it's harder in some senses, but seriously, the payoffs are huge.

BRUCE MCCABE: I keep thinking of the Amazons, the FedExes, the DHLs as perfect candidates, because they've got global systems, repeatable systems within their massive distribution centers, lots of vehicles …

PAUL NEWMAN: … with the vehicles that you don't know are going to turn up yet, and innovate with us.

BRUCE MCCABE: And they're doing some robotics, but there's so much more!

PAUL NEWMAN: So much more. And where am I? What's around me? What should I do? Common questions. And now, coming back to where we started, what should I share? It's a really interesting question.

BRUCE MCCABE: Let's dig into that a little bit.

PAUL NEWMAN: So this is the point about all vehicles talking and sharing the experiences of all vehicles. That could be an N-squared data sharing problem if you're not careful. That's going to go south really quick. So, what's relevant? And again, that's a syllabus question. Of the things I saw, what is relevant? And one way to think about that is, well, what things have I seen that I didn't pass an exam on?

So imagine you're a young engineer and you've gone out into the world. You've had all your internal tests, you've had your weekly tests and your rehearsals with your tutor, and then your finals came up and you passed. Woo! I'm a qualified engineer. And you go and do your first engineering job, and suddenly you're faced with stuff that's just not on the syllabus. Like, what? What's this? What's this? I should have stayed and done a PhD or something … So that's one way to sift it. If the things that you see are different from the syllabus you were given, then that's interesting feedback for syllabus generation. And in fact universities do that. They hear from their graduates, and they talk to employers: what are the skills you would like us to teach? And there's always this balance about whether you learn something fundamental or learn it on the job. But I think machines have a different opportunity to do that.

And that also comes back to the safety case, right? So if, at operating time, the machine says, whoa, this is not like you told me the world was going to look, and I am now converting what I see in real time into a kind of exam question that I might have been asked, and I'm comparing that question to the questions I know you asked me -- and there are millions -- and it's not like any of them. What could that mean? It could mean that you're unknown and unsafe. It's a really big idea.
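The "distance from my exam questions" idea can be sketched as a nearest-neighbor novelty check: embed the current scene, measure the distance to the nearest rehearsed scenario the system is known to have handled correctly, and treat a large distance as "unknown and unsafe". The embeddings and threshold below are toy values invented for illustration:

```python
# Toy out-of-distribution check: how far is the current scene from the
# nearest scenario in the bank of "exams" the system passed in rehearsal?
import math

# Toy 2-D embeddings of scenarios the system was tested on and passed.
passed_exams = [(0.0, 1.0), (0.2, 0.9), (1.0, 0.0)]

def novelty(scene):
    """Distance to the nearest scenario we know we handled correctly."""
    return min(math.dist(scene, exam) for exam in passed_exams)

def respond(scene, threshold=0.5):
    if novelty(scene) > threshold:
        return "slow down and escalate to a human"   # off-syllabus
    return "proceed"                                  # close to a passed exam

print(respond((0.1, 0.95)))  # near a rehearsed scene -> proceed
print(respond((5.0, 5.0)))   # far from anything rehearsed -> escalate
```

This is the Tuesday/Wednesday point in miniature: confidence comes not from the answer itself but from how close the question is to ones the system has already been graded on.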

BRUCE MCCABE: It's really important. And the general AI problem is getting AI to tell you when it doesn't know -- to identify its own knowledge gaps.

PAUL NEWMAN: Right. And one way to do that is you go, well, distance from stuff that I know I got right. So if you ask me a question like, what day comes after Tuesday, and I say Wednesday, and then you ask me, does Wednesday come after Tuesday? Those two questions sound really similar. So I feel confident about answering the second question, because you asked me what day comes after Tuesday, I said Wednesday, and you said, well done, Paul, you're right. And if you swap that question around, it's close to the one I know I passed. But if you say, ‘how much bicarbonate of soda do you need to put in a cake?’ What? You've only told me about days of the week. This question, I don't know. I think 11.

I know that's a silly example. I did a stark one on a completely different topic so it would sound different. But it could be something like … I have rehearsed, using generative AIs, driving around this area, and now suddenly in front of me I am seeing 120 people that are less than four foot tall. Okay, there's a school trip. Okay, you could be fine, right? You hope you would be. But this is now very, very different from rehearsal. Now, what could you do if it's very different? Well, the first thing you could do is slow down. And maybe at that point you do say, human, are you okay with this? And you get a call-in.

And again, I think that speaks to why it's so difficult to do personal transport first, because that doesn't scale so well. But if it's vehicles doing known routes on buses, and you've got this sort of hyper-local model to work out, then that's a credible business plan for how to do those things. You can get insurance and assurance, and it's a really good moment. So, say the human goes, yeah, you're fine, next to that school you were doing that exactly right. What does that mean? That means I passed another exam, so I can extend the qualification for that route for when I see that again.

Of course, what we actually do is bring that back in and then dream on it -- do a variation on the theme and generate bajillions of slight perturbations of that. So all of our training goes into what we'd call an adversarial loop, where we go, try and find the nightmares. You try and find the nightmares for that release of software.

BRUCE MCCABE: Here's 10,000 iterations of the same situation, some of which are completely outside of your experience.

PAUL NEWMAN: Yeah. Imagine you had a really mean teacher, right [laughs], or a good one -- I don't know if the two are synonymous. You ask a question and give an answer back. A good teacher homes in on the 30% that was wrong and asks you another question. It's customized to you, Bruce, and then after three or four questions, ideally it's got you to a spot where you go, ‘I don't know.’ What have they done there? They've found the gap in your knowledge, because you've been put in an adversarial loop of question, response, question, response, and those answers are being used to find a gap. So then fill that gap in with syllabus generation, synthetically, using a gen AI, and go, oh, by the way, here are 60 homework questions. Do those and you'll get those skills. And you can do that about a place. It's the number seven bus route.
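The adversarial "find the nightmares" loop reduces to three steps: probe the student model, collect the cases it fails, and perturb those failures into many nearby variants as new homework (standing in for gen-AI syllabus synthesis). Everything below -- the one-dimensional "difficulty" score, the pass threshold, the jitter -- is a toy invented for illustration:

```python
# Toy adversarial training loop: probe the student, find its failures,
# and synthesize a homework set of perturbed variants that targets the gap.
import random

random.seed(0)  # deterministic for the example

def student(x):
    # Stand-in "student": passes scenarios below difficulty 0.7.
    return x < 0.7

def find_nightmares(probes):
    # The adversarial step: keep only the scenarios the student fails.
    return [x for x in probes if not student(x)]

def synthesize_homework(failures, n_variants=5, jitter=0.05):
    # Perturb each failure into nearby variants, clipped to [0, 1] --
    # a synthetic syllabus aimed at the observed gap.
    return [min(1.0, max(0.0, x + random.uniform(-jitter, jitter)))
            for x in failures for _ in range(n_variants)]

probes = [i / 10 for i in range(11)]      # difficulties 0.0 .. 1.0
failures = find_nightmares(probes)        # the hole in the syllabus
homework = synthesize_homework(failures)  # targeted training material
print(len(failures), len(homework))       # 4 failures -> 20 homework items
```

In the real pipeline the "retrain on homework, re-probe" cycle would repeat per software release, which is what turns the loop into a flywheel.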

BRUCE MCCABE: You know, all of these feedback loops -- to me they're all accelerators, they all multiply each other, and they all speak to the inevitability of really, really wide-scale deployments. And there's a human feedback loop outside of this, I think, which also plays in. When people see things that everyone else is benefiting from -- something they never really thought they'd do themselves -- they suddenly get much more excited. I remember a transport person who was doing the railway system in Perth, in Australia. I remember hearing him interviewed, and he said the main thing he wanted to make sure of was that the lines ran down the middle of the freeways, so that everyone on their morning commute could see everyone else going faster than they were. Every day, every day, they're in their cars and they're watching other people come swooshing past them, faster.

PAUL NEWMAN: What a great idea.

BRUCE MCCABE: It's brilliant. But it was the psychology. He said, if you want people to use it every day, remind them that it's faster, and I think they're going to see the same thing [with AVs]. They're going to see it's convenient. The pickups and drop-offs are there, the money saved in the warehouse is there. The bus routes are more efficient. The asset utilization is more efficient. It's just going to multiply.

PAUL NEWMAN: The vehicles are different. I think so. And I very rarely … I can't think of anyone in the last 18 months who genuinely says no, that's not going to happen.

BRUCE MCCABE: Absolutely not.

PAUL NEWMAN: The question is, when and where. And it's kind of like saying, yeah, this whole computing thing is going to take off. Now I think there's been hyperbole, I think there's been razzmatazz, but again and again we're in that situation that, as a species, we're endlessly over-optimistic about how soon it's going to be and endlessly massively underestimate how huge it will be.

BRUCE MCCABE: Yes. In the end, yeah, yeah. So lots of somewheres, if we try and map it out.

PAUL NEWMAN: You get everywhere by doing somewheres.

BRUCE MCCABE: So lots and lots of somewheres, lots of narrow contexts and interesting contexts that obviously work, then combining that with a more universal layer, so those somewheres all lead to everywhere, ultimately.

PAUL NEWMAN: Each somewhere. Or, the 100th somewhere is faster because of the 99th, yeah, and it's faster because of the 98th, and each of those is satisfying what a customer actually wants in my airport, my bus route, my city, my delivery route. And they join up. And then you build the technology to enable that to get faster. So you build yourself a flywheel, which is better than going, ‘I will sit here darkly until it's all done.’

BRUCE MCCABE: Yeah. You're going to own the show.

PAUL NEWMAN: Well, we do it with co-founders. That's an interesting thing, hey? We do it with other businesses who want to join in that. But what a privilege, hey?

BRUCE MCCABE: There you go. You're going to enable the show. Yeah, there you go.

BRUCE MCCABE: All right. Well, that's probably a good way to end it.

PAUL NEWMAN: It's been a pleasure, hey?

BRUCE MCCABE: Yeah, it's been wonderful. Thanks for making so much time.

PAUL NEWMAN: Real pleasure. Take care.
