...I know, I know... but this is a pretty good read from MIT researchers.
This is what humans think a driverless car should prioritise anyway:
https://www.bbc.co.uk/news/technology-45991093
It's a classic question, but it becomes really interesting now that the decision needs to be baked into an algorithm rather than taken by a human in the heat of the moment. I'm not really sure that it matters what we think we would do as humans; I think Mercedes have said already that they will always prioritise the passengers in the car, and it's difficult to see how you would get into a car that would do otherwise once it was available.
But it's a tough one, especially when liability is involved
Surely the people inside the car are already at an advantage by being inside a car that has presumably been built with various safety aids to prevent the people inside from dying anyway?
In an unavoidable collision the car should take the hit, after trying to slow down a bit, rather than ploughing into innocent pedestrians.
I like Guy Martin’s take on this: the car should always sacrifice its passengers - they chose to be in the car, not the pedestrians.
Who is going to buy a car that prioritises the safety of other road users over the safety of the driver and their loved ones?
In an unavoidable collision the car should take the hit
The bit everyone misses is where the car avoids the things that got a human to that point in the first place.
How often are people getting into this difficult moral situation?
I read a version of this somewhere which basically said that in the vast majority of cases, the solution is simply for the car to brake in a straight line as hard as it can. The idea that they have to incorporate some sort of ethical judgement in an algorithm is a bit of a conceit. Personally I think the car should aim straight at Schroedinger's Cat and put it in or out of its misery once and for all.
Think the last two posts here have it. Glad to see not all minds baffled by bullshit (like mine generally is) 🙂
How often are people getting into this difficult moral situation?
Ideally you never get there, or at least, as you say, it should be way less common than normal. That doesn't mean you can ignore it. If you are writing the software you have no choice but to deal with the situation and decide what will happen, even if it is incredibly rare. It isn't an ethical judgement for the software - it is just following what it has been told to do. The ethical decisions are taken by those who have to apply the weights to the different outcomes.
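To make that last point concrete, here's a minimal sketch of what "applying weights to the different outcomes" could look like. Everything here is invented for illustration - the outcome names, the weights and the candidate manoeuvres aren't from any real system. The point is just that the ethics live in the numbers a human writes down, not in the code that picks the minimum.

```python
# Hypothetical outcome weights - pure human value judgements.
OUTCOME_WEIGHTS = {
    "occupant_injury": 1.0,
    "pedestrian_injury": 1.0,
    "property_damage": 0.01,
}

def manoeuvre_cost(predicted_outcomes):
    """Sum of weighted costs for one manoeuvre's predicted outcomes."""
    return sum(OUTCOME_WEIGHTS[k] * v for k, v in predicted_outcomes.items())

def choose_manoeuvre(candidates):
    """Pick the candidate manoeuvre with the lowest weighted cost."""
    return min(candidates, key=lambda c: manoeuvre_cost(c["outcomes"]))

# Made-up predictions for two made-up manoeuvres.
candidates = [
    {"name": "brake_straight",
     "outcomes": {"occupant_injury": 0.2, "pedestrian_injury": 0.1, "property_damage": 5.0}},
    {"name": "swerve_left",
     "outcomes": {"occupant_injury": 0.6, "pedestrian_injury": 0.0, "property_damage": 20.0}},
]
print(choose_manoeuvre(candidates)["name"])  # brake_straight
```

The code itself is trivial; everything the thread is arguing about lives in that weights table, which someone has to write.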
That doesn’t mean you can ignore it
True, but it is generally sensationalised and not presented in a way that the evidence reflects.
Ideally you would show the hundreds of decisions made by observing things that many humans missed and the steps it takes to prevent getting into those situations.
Then the current stats for human screw ups.
Then be open and honest about logic implemented if possible.
Can the software author be prosecuted for dangerous driving?
The research I read also showed that in different parts of the world people set their priorities differently.
Can the software author be prosecuted for dangerous driving?
I don't know the answer to that, but it is definitely being discussed, along with liability for things like software testing to make sure it works as designed. None of this is easy.
Then be open and honest about logic implemented if possible
This is where the difficulty lies; no-one likes to be open about it. I worked in aid once, where someone went mad at another agency because they wouldn't fly a plane in to rescue someone who was having difficulty with a birth. To them it was risky, and without the plane the child would be lost. The person in the other agency who had to make the decision knew that if they did that every time someone thought it necessary, they would close quickly and stop doing all the stuff that actually made a difference.
As you said, you need to be open, but it's not easy. We are not used to having a value put on our lives compared to others, as that doesn't feel 'human'.
But AI isn't an algorithm so it isn't programmed by a human, so these autonomous cars will end up deciding for themselves and we'll never know how they came to that decision.
The occupants of the car decide to be in the car... so what? The pedestrian decides to be in the middle of a busy road. If they looked first before stepping out onto the road then the accident wouldn't happen at all. At least in a driverless car there is no 'driver' to blame for speeding or making silly decisions, as the driverless car would be driven according to strict algorithms and rules, so the fault for the accident then becomes the pedestrian's, no question. In the real world with human drivers there is always the question of the judgement of the driver: were they driving too fast, were they distracted or not paying attention, were they doing something silly at the time, etc. Not so with a computer.
But you would have thought in these days of big data the car would be able to, in a fraction of a second, evaluate many scenarios that would minimise the overall injuries. It would know the structural integrity of the car and therefore the force the occupants should be able to survive, the angle to hit the pedestrian at to increase their likelihood of surviving, even choosing to swerve into a lamppost or wall if the speed was survivable. The computing power and speed of a powerful computer driving a car, evaluating different scenarios, are many orders of magnitude greater than those of a human driver, whose decisions are compromised by silly emotions. So overall the outcomes will be better in the driverless car situation.
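The "survivable speed" part is at least easy to sketch with basic physics. Assuming full braking at roughly 9 m/s² (an illustrative dry-tarmac figure, not a measured one), the residual speed at an obstacle is just v = sqrt(v0² - 2ad):

```python
import math

def impact_speed_ms(initial_speed_kmh, distance_m, decel_ms2=9.0):
    """Residual speed (m/s) on reaching an obstacle after full braking.

    decel_ms2=9.0 is an assumed dry-tarmac figure; real braking varies
    with surface, tyres and load, so treat the numbers as illustrative.
    """
    v0 = initial_speed_kmh / 3.6               # km/h -> m/s
    v_squared = v0 ** 2 - 2 * decel_ms2 * distance_m
    return math.sqrt(max(v_squared, 0.0))      # clamp: stopped before impact

# From 50 km/h with 15 m to spare, the car stops short of the obstacle:
print(impact_speed_ms(50, 15))                 # 0.0
# From 100 km/h with only 30 m of warning, it still arrives at ~15 m/s:
print(round(impact_speed_ms(100, 30), 1))      # 15.2
```

So even a perfect machine braking from 100 km/h with 30 m of warning still hits at roughly 55 km/h; the interesting "choices" only exist inside that physics window.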
The author of the code that controls the car might very well be liable - all the more reason why things like this need globally agreed rules. A case in point: when Senna died, the Italian courts tried to get Frank Williams on a manslaughter charge because Italian law insisted someone had to be at fault - it didn't recognise what an accident was. Clearly in this case we can't have that sort of ambiguity and variation across different countries.
We are not used to having a value put on our lives compared to others as that doesn’t feel ‘human’
How does 5pts and an awareness course sound? We should obviously hold them as accountable as human drivers.
But AI isn’t an algorithm so it isn’t programmed by a human, so these autonomous cars will end up deciding for themselves and we’ll never know how they came to that decision.
Source for this one? You can record every sensor input, all the telemetry, the actions it took and the code that executed.
Surely a human would be able to accelerate out of trouble if they had a nice big 3l engine with lots of hp.*
* Or so the advanced godlike drivers of the UK would try and tell you...
mike this might give a bit of insight - https://www.sentient.ai/blog/understanding-black-box-artificial-intelligence/
so the fault for the accident then becomes the pedestrian's, no question
Ermm. No.
As nice as it is thinking of us code monkeys as infallible superheroes, sadly the truth is rather far from that (even without getting into the use of neural networks, which can develop their own bad habits).
To take your example: a human driver may well be held responsible for hitting the pedestrian on the road, although admittedly they would stand a good chance of getting away with it under the jury system.
What happens when a driverless car is unable to avoid a collision with another driverless car?
Car hits pedestrian... nope, still the car's fault.
How about if everyone knew, and was absolutely certain, that no car would even try to stop if they stepped out in front of it? Wouldn't that make it always the pedestrian's fault?
(And the law of natural selection would kick in and the SMobies would slowly get whittled down by the killer cars... 😉)
After all, there are people who put their heads in crocodiles' mouths, and when they eventually and inevitably get bitten, no one blames the crocodile, do they?
How about if we accepted that car is not king.
The world would be a better place.
I like Guy Martin’s take on this: the car should always sacrifice its passengers – they chose to be in the car, not the pedestrians.
Yeah well he’s right and it’s basic philosophy. If you want to know why just listen to Rob Newman’s ****ing excellent description of fat man on a bridge.
There is no question that this is not some reductionist utilitarian question. People are not commodities. Code monkeys have nothing to do with this. It is not a complex algorithm. All risk and outcome must only affect the occupants of the vehicle. Until such a point vehicles are not strictly autonomous.
A stat to consider
On an average day in the UK, five or six people will die on the roads.
The current system is poor, if it can be improved it should be.
We are just being softened up for the idea that driverless cars will not be much safer than humans, if at all, and will kill people both inside and outside cars routinely. The level at which they do so will be based on economics not ethics.
Really? Is there evidence so far that they are less safe?
Given it does not get distracted, does not get tired, does not make inconsistent decisions, does not suffer from delusions of grandeur...
The most dangerous part of a modern car is behind the wheel.
The machine isn't making any decisions by itself btw, it's just using its programming, programmes that have been created by humans.
All this AI stuff, at its core, is still machines doing what humans tell them to do. AI is good advertising mind, and the technology is getting better at an increasing rate, but the responsibility still lies with humans, not machines.
The machines only do what we tell them. The car isn't deciding anything.
The machine isn't making any decisions by itself btw, it's just using its programming, programmes that have been created by humans.
All this AI stuff, at its core, is still machines doing what humans tell them to do. AI is good advertising mind, and the technology is getting better at an increasing rate, but the responsibility still lies with humans, not machines.
The machines only do what we tell them. The car isn’t deciding anything.
That's not how this kind of AI works. It is not like a normal computer programme written by humans.
I read a version of this somewhere which basically said that in the vast majority of cases, the solution is simply for the car to brake in a straight line as hard as it can. The idea that they have to incorporate some sort of ethical judgement in an algorithm is a bit of a conceit.
Yes, it's an ethical thought experiment, but it doesn't have much practical relevance to the real world situation. In 99.9% of cases, identifying the dangerous situation as quickly as possible and braking hard will be the answer, and a machine will do that faster and more reliably than a human.
Anyone who uses UK roads and observes how human drivers currently behave will know that an autonomous car would not have to be incredibly intelligent to perform better...
Anyone who uses UK roads and observes how human drivers currently behave will know that an autonomous car would not have to be incredibly intelligent to perform better…
After having to commute through Bradford for the last few years I'd be more than happy for all cars to be driverless. They're all driverless at the moment, but not in the same way!
You're not half joking, Glenn.
The autonomous car will be driving appropriately for the conditions and at an appropriate speed.
Not the speed the human decides it needs to be doing because they have bad timekeeping.
Exactly, the car, and all the other cars will be far less likely to get into a difficult situation, as they don't get tired, angry, or distracted.
They will never speed or make bad overtakes or try to squeeze into gaps that are too small. This alone will make them far safer for everyone.
Also, AI is not to be confused with automated vehicles, which some seem to be doing. Automated vehicles as they are simply look at all the data and react according to what the programming tells them to, depending on the data from sensors etc. - maybe even the sensors from nearby automated vehicles too, if they can be networked somehow.
That would make things even safer. For example, if there's an incident, every automated car for a few miles around would receive the data that other cars have had to slow down or stop at a particular spot, and either slow down accordingly or even change route to avoid the problem.
That’s not how this kind of AI works. It is not like a normal computer programme written by humans.
It's still people defining it all. The bots aren't defining anything. Self aware computers aren't a thing and aren't likely to be.
In the OP's question, they may have 'generated' a bot that can tell the difference between the two scenarios, or a number of scenarios, but it's still not making the decision as to who to kill. That'll be up to people to define those parameters.
If it isn't, then this moral question is irrelevant to humans, just let the machines decide, if they can...
Even if the computer is programmed by machine learning rather than specific algorithms, somebody still has to tell it which behaviours are good and which are bad.
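A rough illustration of that point, with every single number invented: even if the driving policy is "learned", the reward it is trained against is a table of human value judgements.

```python
# Illustrative only: a hypothetical reward function a trainer might score
# simulated driving events with. Every number here is a human judgement
# dressed up as engineering - none of it comes from a real system.

PENALTIES = {
    "collision": -1000.0,
    "lane_departure": -20.0,
    "speeding": -10.0,
    "hard_brake": -5.0,
}

def reward(event):
    """Score one simulated driving event; 'progress' earns a small bonus."""
    bonus = 1.0 if event == "progress" else 0.0
    return PENALTIES.get(event, 0.0) + bonus

print(reward("collision"))  # -1000.0
print(reward("progress"))   # 1.0
```

Whether a collision is "worth" a thousand times a hard brake is exactly the kind of call a human makes before any learning starts.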
That’s not how this kind of AI works. It is not like a normal computer programme written by humans.
Machine learning covers a whole range of different disciplines. The overlap of machine learning with self-driving cars is unclear though. Some companies seem to be going one route or another, although even then it isn't clear whether they are just going one specific route or mixing and matching different variants of AI (say some expert system overlaid with some neural networks).
Even if you just went the NN route, it can get complex (aka way beyond my skills) in terms of how you train the network. Take one approach and the human influence will be very high, but even the alternative approaches will take in our prejudices. There is an (almost certainly made up) story about an early attempt at using AI to identify tanks. The story goes they had it accurately recognising tanks and everyone was impressed, but then they tried it in the field and it did shit. After investigation they found the photos of tanks had been taken in cloudy conditions and the control photos on a sunny day, so it had homed in on overcast skies as the sign of a tank.
Currently there is a bit of an issue with people being recognised by facial recognition systems. They have mostly been built on convenient test subjects, so white and some Asian, and fail badly with other races.
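Both stories are the same failure mode. Here's a toy reconstruction of the (probably apocryphal) tank example, with completely made-up numbers: a trivial "classifier" that latches onto image brightness because, in the training data, brightness happens to correlate with the label.

```python
# (label, mean_brightness) pairs - tanks photographed on overcast days
# (dark images), non-tanks on sunny days (bright). All numbers invented.
training = [("tank", 0.30), ("tank", 0.25),
            ("no_tank", 0.80), ("no_tank", 0.75)]

# "Training": a brightness threshold happens to separate the classes
# perfectly, so that's what gets learned - not anything about tanks.
THRESHOLD = 0.5

def classify(brightness):
    """Predict 'tank' for dark images - the learned shortcut."""
    return "tank" if brightness < THRESHOLD else "no_tank"

# Perfect score on the training set...
print(all(classify(b) == label for label, b in training))  # True

# ...but a tank photographed on a sunny day gets misclassified.
print(classify(0.85))  # no_tank
```

The prejudice isn't in the algorithm; it's in what the data happened to contain.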
But you would have thought in these days of big data the car would be able to, in a fraction of a second, evaluate many scenario’s that would minimise the overall injuries
If by "big data" you mean stuff running in a datacentre somewhere, then latency is going to make that very hard. Along with network outages and tunnels.
Think about how long it takes for Alexa to respond when you ask it to play some music - now imagine taking that long for your AI to respond while travelling at 100km/h in the rain, at night, while your children are watching an online movie in the back, and the car in front has just slammed on the anchors.
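The arithmetic is worth spelling out. Assuming, say, a 500 ms round trip to the datacentre (an illustrative figure, not a measured one):

```python
def distance_during_latency_m(speed_kmh, latency_ms):
    """Metres travelled while waiting for a remote response."""
    return (speed_kmh / 3.6) * (latency_ms / 1000.0)

# At 100 km/h, a 500 ms round trip costs nearly 14 m of blind travel:
print(round(distance_during_latency_m(100, 500), 1))  # 13.9
```

Fourteen metres per decision is why the heavy lifting has to happen on the car itself.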
You'd imagine the car would contain the necessary computing power and info in that scenario and wouldn't need to access data across the network. It would be a bit mental for a car to be making that kind of decision in an area where there's only 3G available! 😆
The car is likely to update itself regularly mind.
The car is likely to update itself regularly mind.
That is already happening with some cars and does pose a potential risk when the car is in mode 2/3 as opposed to 5. A couple of accidents do seem related to the car making different decisions after an update, which is a tad problematic when the driver, although in theory completely alert, in reality isn't.
I think robust updating systems are already becoming a thing; you already have the concept of the "evergreen" browser, i.e. it self-updates, so the updating is unlikely to be left to individuals. If that kind of thing isn't already prevalent in loads of places, I'm sure it will be in short order.
(I only learned about this recently, but it's been a thing for a while I think.)
Anyhow, I think with cars, to say they will be completely accident-free is fanciful; the question is really will they be less accident-prone than humans.
I think robust updating systems are already becoming a thing; you already have the concept of the "evergreen" browser, i.e. it self-updates
Yeah, but the difference is how dangerous the code change is. There is a reason, for example, that for Windows 10 Enterprise they are allowed to defer changes. You can't test for all scenarios, so the question is how many you miss and how dangerous they end up being.
the question is really will they be less accident-prone than humans.
That is the question. We know people are shit at driving. The risk with self-driving cars is that instead of a particular problem affecting one person, it gets replicated across several thousand cars.
I am getting more and more cynical about self-driving cars. Unfortunately the culture of "move fast and break things" vs, say, the aviation industry's "try for redundant systems and test as much as possible" seems to be winning. I don't really trust software developers and I certainly don't trust the testers.
Unfortunately the culture of "move fast and break things" vs, say, the aviation industry's "try for redundant systems and test as much as possible" seems to be winning. I don't really trust software developers and I certainly don't trust the testers.
It has to be a middle ground; aviation is testing in a bubble, whereas these systems need to be tested where it's busy.
The testing should be open to scrutiny and all results available for audit.
The other question is: do you trust the testers and devs more than the drivers on the roads?
IMHO it's actually far more difficult to get a car to drive autonomously on a busy motorway with roadworks, high speeds, poor lane markings and crap drivers than it is to do it at relatively low speeds in city or town traffic. After all, if something goes wrong it can just stop, and if the average speed is ~20 mph this will be within its own length, more or less.
The whole ethics argument is a red herring. Can someone give me a genuinely plausible scenario where the car is choosing between young and old, or prioritising the driver over a pedestrian? The reality is that if designed (trained?) properly it shouldn't have got into this situation in the first place, which makes it a moot argument.
I can't say I've ever been faced with a tricky moral decision when driving. Autonomous cars will be safer in the long run as they won't take reckless decisions; can see in the dark; and don't seek to punish other drivers.
Surely, apart from situations of low grip, an automated car will be driving so safely that this shouldn't happen.
Obviously I realise that while we have driver driven cars as well it might have no choice but to make a decision.
I think another aspect of this is who actually wants driverless cars? I'm highly dubious about how popular they'll be in the short to mid term at least anyway.
The hype is all well and good but I don't really see these getting forced upon everyone.
And there's the fact that people enjoy driving. So these things are going to have to integrate with the current system; they won't take over, least not for a good while imo. More likely to be introduced as a safety feature in the short to mid term?
Like playing f1 with all the driving aids turned on? 😆
Imagine the MOT for a driverless car.
Apart from the normal car stuff, your MOT garage will have to check the pc that runs all the software. Is it up to date? What's the hard drive (or SSD) like? Are all the sensors working correctly?
This isn't going to happen anytime soon, I think.
I think another aspect of this is who actually wants driverless cars? I’m highly dubious about how popular they’ll be in the short to mid term at least anyway.
Me, I'd take the train over driving if it was more convenient. If I have to do any distance then why should I be doing the work? I could be having a kip or getting some work done while getting there.
And there’s the fact that people enjoy driving.
And a lot of them do so to the detriment of other people's safety. Things like looking at the scenery while navigating what they think is a quiet road.
I'd be happy to let them drive though if they can pass the same moral tests as an autonomous car and get below the accident rates we specify.
Apart from the normal car stuff, your MOT garage will have to check the pc that runs all the software. Is it up to date? What’s the hard drive (or SSD) like? Are all the sensors working correctly?
You mean like checking all the systems in a modern car? That will be quite quick really. It would probably be in near-constant comms with the main system, so not really an issue. Imagine if the car itself detected a fault and stopped itself from driving?
I was talking to somebody last week about vision systems and using them to measure tread depth; in that case the car could prevent you using it until the safety issue was rectified, not an annual spot check.
At least the car will be able to give a record of what happened in the accident. Currently the driver just has to say "didn't see them" in court as the reason they ploughed into a group of pedestrians rather than slowing down.