With AI: Trust But Verify
Much like the automation of physical labor via robots and machinery, the automation of mental labor via computing power is provoking anxiety among some, often understandably, and overexuberance among others, who try to shove AI and machine learning into areas they're not ready to occupy.
During the recent IME West event in Anaheim, organized by Informa, Pat Baird, head of global software standards at Philips, discussed regulatory standards for AI in healthcare. Within this space, AI touches things as simple as virtual voice attendants on hospital phone systems up to programs that can diagnose disease without human assistance.
To frame the discussion, and in some ways the inevitability of where things are headed, Baird cited a paper published in the medical journal The Lancet in 2019 by Antonio Di Ieva. Di Ieva discussed an alternate view of AI that would "shift the paradigm from one of human-versus-machine, to human-and-machine," writing that, "machines will not replace physicians, but physicians using AI will soon replace those not using it."
Baird then discussed IDx-DR, an autonomous AI-based diagnostic system used to detect diabetic retinopathy. Built on AI-powered algorithms that detect the condition in images of a retina, the technology recently deployed machine learning to boost accuracy. In April 2018, FDA permitted marketing of the technology. In a release, the agency said:
“IDx-DR is the first device authorized for marketing that provides a screening decision without the need for a clinician to also interpret the image or results, which makes it usable by health care providers who may not normally be involved in eye care.”
From an instance where human intervention isn't required, Baird turned to instances where complete trust in AI systems has led people astray. First, during California's 2017 wildfires, local police had to ask drivers to use common sense rather than traffic apps to direct their commutes, after the programs routed users into areas engulfed in flames because those roadways had no traffic.
Next, a 35 mph speed limit sign, modified only slightly with a small piece of tape, led autonomous driving software to read it as 85 mph. A person would think twice about such a speed limit in what's likely a residential area, but the algorithm did not.
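The missing "think twice" step can be sketched as a simple plausibility check. The function and the per-road-type bounds below are illustrative assumptions, not part of any actual driving stack; real systems would also fuse map data and reading history.

```python
# Assumed plausible speed-limit ranges (mph) by road type -- illustrative only.
PLAUSIBLE_LIMITS_MPH = {
    "residential": (15, 45),
    "arterial": (25, 55),
    "highway": (45, 85),
}

def sanity_check_speed_limit(detected_mph, road_type):
    """Return the detected limit if plausible for the road type, else None."""
    low, high = PLAUSIBLE_LIMITS_MPH[road_type]
    if low <= detected_mph <= high:
        return detected_mph
    return None  # implausible reading: fall back to prior limit, alert driver
```

With such a check, the tape-modified sign's 85 mph reading on a residential street would be rejected rather than acted on, while a legitimate 35 mph reading would pass through.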
Finally, Baird offered the example of autonomous vehicles stopping at yellow lights. While that follows the spirit of the law and is safer than gunning it through such intersections, it overlooks one thing: tailgaters. Where a person would likely glance in the rearview mirror to make sure the driver behind them was equally committed to waiting for the light to change, AI wouldn't make such a concession.
These examples brought Baird to a discussion of levels of autonomy. Using vehicles as an example, six levels are defined: Level 0 is no automation, with complete human manual control, while at Level 5 no human interaction or attention is needed and the vehicle is fully autonomous. In between are systems ranging from simple cruise control up to automated steering and acceleration, with human interaction/override possible at all but the final level.
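The six-level scale described above can be summarized compactly; the descriptions below paraphrase the SAE-style taxonomy the talk referenced rather than quoting the presentation.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """SAE-style driving-automation levels, paraphrased."""
    L0_NO_AUTOMATION = 0  # complete human manual control
    L1_ASSIST = 1         # single function automated, e.g. cruise control
    L2_PARTIAL = 2        # steering + acceleration; human monitors
    L3_CONDITIONAL = 3    # system drives; human must take over on request
    L4_HIGH = 4           # no human needed within a defined operating domain
    L5_FULL = 5           # no human interaction or attention needed

def human_override_possible(level):
    """Per the talk's framing, override exists at every level but the last."""
    return level < AutonomyLevel.L5_FULL
```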
Within the injection molding space, multiple machine suppliers offer intelligent systems that can shift process parameters on the fly in response to changes in ambient plant conditions or in the material that impact part filling. On the levels-of-autonomy scale, these fall closer to the cruise-control end of the spectrum than to the no-human-interaction-required end, but it's logical that, in the foreseeable future, the operator could be removed from the equation.
Before that time comes, we need to think about those "yellow light" scenarios where we intrinsically understand a human's thought process and intervention but an algorithm might not. "The challenge is that we forget areas that are second nature to us," Baird said. We'd best remember those areas as we design AI systems for the future.