AI has seen a renaissance over the last year, with developments in driverless vehicle technology, voice recognition, and the mastery of the game 'Go,' revealing how much machines are capable of.
But with all of the successes of AI, it's also important to pay attention to when, and how, it can go wrong, in order to prevent future errors. A recent paper by Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville, outlines a history of AI failures which are 'directly related to the mistakes produced by the intelligence such systems are designed to exhibit.' According to Yampolskiy, these types of failures can be attributed to mistakes during the learning phase or mistakes in the performance phase of the AI system.
Here are TechRepublic's top 10 AI failures from 2016, drawn from Yampolskiy's list as well as from the input of several other AI experts.
1. AI built to predict future crime was racist
The company Northpointe built an AI system designed to predict the chances that an alleged offender would commit another crime. The algorithm, called 'Minority Report-esque' by Gawker (a reference to the dystopian short story and movie based on the work by Philip K. Dick), was accused of racial bias: black offenders were more likely to be marked as at higher risk of committing a future crime than offenders of other races. Another media outlet, ProPublica, also found that Northpointe's software wasn't an 'effective predictor in general, regardless of race.'
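To make the bias claim concrete, here is a minimal sketch of the kind of group-level error comparison that audits like ProPublica's rely on: computing the false positive rate (people flagged as high risk who did not go on to reoffend) separately for each group. The records and group labels below are invented for illustration and are not drawn from Northpointe's data.

```python
# Toy audit: compare false positive rates across two groups.
# Each record is (group, predicted_high_risk, reoffended); the data is made up.
from collections import defaultdict

records = [
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", False, False), ("B", True, False), ("B", False, False), ("B", False, True),
]

false_positives = defaultdict(int)  # flagged high risk but did not reoffend
negatives = defaultdict(int)        # everyone who did not reoffend

for group, predicted_high_risk, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if predicted_high_risk:
            false_positives[group] += 1

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"Group {group}: false positive rate = {rate:.2f}")
```

Run on these toy records, the script reports a higher false positive rate for group A than for group B, which is the kind of disparity critics pointed to.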
2. Non-player characters in a video game crafted weapons beyond creators' plans
In June, an AI-fueled video game called Elite: Dangerous exhibited something the creators never intended: The AI had the ability to create superweapons that were beyond the scope of the game's design. According to one gaming website, '[p]layers would be pulled into fights against ships armed with ridiculous weapons that would cut them to pieces.' The developers later pulled the weapons from the game.
3. Robot injured a child
A so-called 'crime fighting robot,' built by Knightscope, crashed into a child in a Silicon Valley mall in July, injuring the 16-month-old boy. The Los Angeles Times quoted the company as saying the incident was a 'freakish accident.'
4. Fatality in Tesla Autopilot mode
As previously reported by TechRepublic, Joshua Brown was driving a Tesla engaged in Autopilot mode when his vehicle collided with a tractor-trailer on a Florida highway, in the first-reported fatality involving the feature. Since the accident, Tesla has announced major upgrades to its Autopilot software, which Elon Musk claimed would have prevented that collision. There have been other fatalities linked to Autopilot, including one in China, although none can be directly tied to a failure of the AI system.
5. Microsoft's chatbot Tay utters racist, sexist, homophobic slurs
In an attempt to form relationships with younger customers, Microsoft launched an AI-powered chatbot called 'Tay.ai' on Twitter last spring. 'Tay,' modeled around a teenage girl, morphed into, well, a 'Hitler-loving, feminist-bashing troll'—within just a day of her debut online. Microsoft yanked Tay off the social media platform and announced it planned to make 'adjustments' to its algorithm.
SEE: Big data can reveal inaccurate stereotypes on Twitter, according to UPenn study (TechRepublic)
6. AI-judged beauty contest is racist
In 'The First International Beauty Contest Judged by Artificial Intelligence,' a robot panel judged faces, based on 'algorithms that can accurately evaluate the criteria linked to perception of human beauty and health,' according to the contest's site. But because the organizers failed to supply the AI with a diverse training set, the contest winners were all white. As Yampolskiy said, 'Beauty is in the pattern recognizer.'
7. Pokémon Go keeps game-players in white neighborhoods
After the release of the massively popular Pokémon Go in July, several users noted that there were fewer Pokémon locations in primarily black neighborhoods. According to Anu Tewary, chief data officer for Mint at Intuit, it's because the creators of the algorithms failed to provide a diverse training set, and didn't spend time in these neighborhoods.
8. Google's AI, AlphaGo, loses game 4 of Go to Lee Sedol
In March 2016, Google's AI, AlphaGo, was beaten in game four of a five-game series of the game Go by Lee Sedol, an 18-time world champion of the game. And though the AI program won the series, Sedol's win proved AI's algorithms aren't flawless yet.
'Lee Sedol found a weakness, it seems, in Monte Carlo tree search,' said Toby Walsh, professor of AI at the University of New South Wales. But while this can be considered a failure of AI, Yampolskiy also makes the point that the loss 'could be considered by some to be within normal operations specs.'
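For readers unfamiliar with the algorithm Walsh names: Monte Carlo tree search picks moves by repeatedly simulating games and balancing the exploitation of moves that have performed well against exploration of moves that have been sampled less. The sketch below illustrates only that selection step (the UCT rule) with invented statistics; it is not AlphaGo's actual implementation, which combines tree search with neural networks.

```python
# Minimal illustration of the UCT selection rule at the heart of
# Monte Carlo tree search. The move statistics below are made up.
import math

def uct_score(child_wins, child_visits, parent_visits, c=1.41):
    """Average reward plus an exploration bonus for undersampled moves."""
    if child_visits == 0:
        return float("inf")  # always try unvisited moves first
    exploit = child_wins / child_visits
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore

# Hypothetical (wins, visits) statistics for three candidate moves
# after 100 simulations through the parent position.
children = {"move_a": (30, 50), "move_b": (20, 30), "move_c": (12, 20)}
parent_visits = 100

best = max(children, key=lambda m: uct_score(*children[m], parent_visits))
print("Selected move:", best)
```

The exploration bonus is what lets the search recover from an early misjudgment of a move, and a position that systematically defeats that sampling (as Sedol's move 78 appeared to) is exactly the kind of weakness Walsh describes.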
9. Chinese facial recognition study predicts convicts but shows bias
Two researchers at China's Shanghai Jiao Tong University published a study entitled 'Automated Inference on Criminality using Face Images.' According to the Mirror, they 'fed the faces of 1,856 people (half of which were convicted violent criminals) into a computer and set about analysing them.' In the work, the researchers concluded that there are 'some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle.' Many in the field questioned the results and the report's ethical underpinnings.
10. Insurance company uses Facebook data to issue rates, shows bias
And, finally, this year England's largest vehicle insurer, Admiral Insurance, set out to use Facebook users' posts to see whether there was a correlation between their use of the social media site and whether they would make good first-time drivers.
While this isn't a straight AI failure, it is a misuse of AI. Walsh said that 'Facebook did a good job in blocking this one.' The endeavor, which was called 'firstcarquote,' never got off the ground because Facebook blocked the company from accessing data, citing its policy that states companies can't 'use data obtained from Facebook to make decisions about eligibility, including whether to approve or reject an application or how much interest to charge on a loan.'
As evidenced by these examples, AI systems are deeply prone to bias—and it is critical that machine learning algorithms train on diverse sets of data in order to prevent it. As AI increases its capabilities, ensuring proper checks, diverse data, and ethical standards for research is of utmost importance.
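As a rough illustration of what 'diverse training data' means in practice, the sketch below counts how each group is represented in a training set before a model ever sees it. The group labels and the 10% threshold are arbitrary placeholders; a real audit would count whatever attribute matters for the application (skin tone in the beauty contest, neighborhood in Pokémon Go, and so on).

```python
# Toy pre-training audit: flag groups that are badly underrepresented
# in a labeled dataset. The labels and threshold are illustrative only.
from collections import Counter

training_labels = ["group_a"] * 940 + ["group_b"] * 40 + ["group_c"] * 20

counts = Counter(training_labels)
total = sum(counts.values())

for group, n in counts.most_common():
    share = n / total
    flag = "  <-- underrepresented" if share < 0.10 else ""
    print(f"{group}: {n} examples ({share:.1%}){flag}")
```

A check this simple would have surfaced the skew behind several of the failures above before any model was trained on the data.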
Consumers and businesses depend on software every day for a variety of functions, and when bugs strike or errors occur, the consequences can be staggering.
In a recent report, software testing company Tricentis analyzed 606 software fails from 314 companies to better understand the business and financial impact of software failures. The report revealed that these software failures affected 3.6 billion people, and caused $1.7 trillion in financial losses and a cumulative total of 268 years of downtime.
Here are five takeaways from the company's Software Fail Watch 2017 report:
1. Software failures vary by industry
Last year the retail and consumer technology space reported the most software failures, in large part thanks to problematic smartphone updates and the security/hacking exploits intended to target them.
Media coverage of failures in both the services and utilities sector and the entertainment industry hit a three-year low compared with their 2015 and 2016 rankings. Transportation industry bug stories also dipped slightly.
Public services and healthcare reported 30% fewer failures than in 2016, yet still had significant issues: 'We saw countless stories of hacking and tampering with international election processes, WiFi vulnerabilities that exposed the data of billions of people, IT issues that caused tens of thousands of letters that went unsent to patients and doctors, millions of dollars in overpaid medical billing—the list goes on and on.'
SEE: Software quality control policy (Tech Pro Research)
2. Software failures vary by environment
Environment also plays a role in software failures. On-premises software had the most issues last year (300 instances). Mobile/cloud software came in second with 193 instances, and embedded software proved most reliable (relatively speaking) with just 113 failures examined.
3. Some types of software failures are more prevalent
Tricentis found that the bulk of software failures (331 instances) were produced by software bugs. Another 136 instances were caused by security vulnerabilities, and 54 were the result of usability glitches.
A software bug is a coding error or fault in a program or system that can produce errors, unintended behavior, or application failures. A usability glitch is more of a design flaw stemming from development shortcomings, such as displaying an error the user cannot dismiss, or omitting a useful function, like the ability to paste content into a field.
Based on these results, bugs are clearly a major obstacle to developing stable, error-free software.
SEE: Job description: Quality assurance engineer (Tech Pro Research)
4. Software failures have a negative impact on company stock and brand
Software failures can be devastating to company value and reputation. UK-based loan company Provident Financial lost 1.7 billion pounds (about $2.4 billion) in market value last year after problems with its scheduling application meant that barely half of its loan debts were collected when due. The failure cost the company 120 million pounds (about $170 million) in lost profit, and the debacle is considered a record-breaking loss.
From a brand value perspective, consumer technology, public service, and services and utilities suffered the most negative repercussions from software failures last year. Health care and finance suffered the least amount of negative brand impact.
5. Software testing is inadequate
Herein lies the crux of the problem. Software failures occur because software testing sometimes allows problems to slip through the cracks. Software bugs were the most common reason behind these failures, but proper testing would have eliminated these issues, as well as at least some of the security vulnerabilities and usability glitches.
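As a simple illustration of that point, the sketch below shows the kind of automated check that catches a pricing bug and an unhandled input before release. The function and test are hypothetical examples, not taken from any of the failures described in the report.

```python
# A hypothetical function plus an automated test that would stop a bad
# build from shipping. Runnable with plain Python (no test framework needed).
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 20) == 80.0   # typical case
    assert apply_discount(19.99, 0) == 19.99   # boundary: no discount
    try:
        apply_discount(50.0, 150)              # invalid input must be rejected
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for out-of-range discount")

if __name__ == "__main__":
    test_apply_discount()
    print("all checks passed")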
Wolfgang Platz, founder and CPO of Tricentis, observed in the report that software development has advanced significantly over the past decade, yet testing strategies have not evolved accordingly.
'Until recently, the ideas and tools that could make a difference were only fringe players. That dynamic is changing,' Platz states. He points to analyst reports such as the 2017 Gartner Magic Quadrant for Test Automation as evidence that legacy tools, developed two decades ago and in some cases still heavily used today, were never intended to support high levels of quality in today's rapid release cycles and 'can no longer keep pace with software development. A new era is upon us: one that requires us to rethink our software testing strategies, tool stacks, and priorities, and reimagine what we can accomplish in the software industry. However, with the right strategies, approaches to automation, and better alignment with the business, I am confident that trend will change.'
Stackify has detailed guidelines for modern software testing techniques, which recommend asking a series of questions at the conclusion of testing: whether the overall application works, whether all features function as they should, whether the application can handle heavy volume, whether there are risky vulnerabilities that could endanger users, and whether the application's usability makes it compelling or frustrating.
Stackify also suggests formulating a software testing philosophy based on building a testing culture, defining a standard set of test preparation tasks, establishing test methodologies, improving the efficiency of the testing process, and deciding how best to use the end results.