Trained on 46 billion data points from patients’ electronic health records, Google’s AI is now showing promise in predicting health outcomes for patients.
Researchers from Google Brain and Stanford University recently published a paper in Nature detailing their work using big data and deep-learning methods to predict the fate of inpatients.
The researchers used the algorithms to predict important outcomes, such as death; readmission, as a measure of quality of care; a patient’s length of stay, as a measure of resource utilization; and a patient’s diagnoses, to see how well clinicians understood a patient’s problems.
The team took a different approach to building predictive statistical models by considering a ‘representation’ of all a patient’s health records, including clinical notes, rather than removing most of a patient’s information from the analysis.
Since cleaning the data typically accounts for some 80 percent of the effort in creating an analytic model, this approach could provide a way to scale up predictive models, assuming the data is available to mine.
They also developed a way to show clinicians exactly what data their model “looked at” for each patient when predicting an outcome.
This technique would allow clinicians to check whether a prediction is based on credible facts and address concerns about so-called ‘black-box’ methods that don’t explain why a prediction has been made.
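One common way to surface which inputs drove a prediction is attention weighting, where the model assigns each record entry a normalized importance score. The sketch below is purely illustrative (the record entries and relevance scores are invented, and the real model is far more complex); it shows how softmax-normalized weights can rank the entries a model relied on:

```python
import math

def softmax(xs):
    """Convert raw scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def top_evidence(records, scores, k=2):
    """Rank health-record entries by attention weight.

    records: list of record descriptions (hypothetical examples)
    scores:  unnormalized relevance scores, assumed to come from a model
    """
    weights = softmax(scores)
    ranked = sorted(zip(records, weights), key=lambda rw: rw[1], reverse=True)
    return ranked[:k]

# Invented example data
records = ["note: pleural effusion", "lab: elevated lactate", "med: aspirin"]
scores = [2.1, 3.4, 0.2]
for rec, w in top_evidence(records, scores):
    print(f"{rec}: weight {w:.2f}")
```

A clinician shown the top-weighted entries can judge whether the prediction rests on credible evidence, which is the point of the attribution work described above.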
Google started working on the project with UC San Francisco, the University of Chicago Medicine, and Stanford Medicine last year, which gave them access to a vast trove of de-identified medical records to validate their deep-learning models.
In total, they had access to health records on 216,221 adult patients who were hospitalized for 24 hours or more, which produced over 46 billion data points.
“We demonstrate that deep-learning methods using this representation are capable of accurately predicting multiple medical events from multiple centers without site-specific data harmonization,” the researchers note.
As Bloomberg reports, medical experts have been impressed by Google’s ability to dig out data from notes in PDFs or handwritten notes on old charts, which had previously been difficult to incorporate into predictive models. Google’s system is both faster and more accurate than previous techniques.
The study has created excitement at Google because it may open a new door to the lucrative healthcare market, where it could one day sell AI-as-a-service to time-constrained clinicians.
The research showed that Google’s models are better at predicting a range of outcomes and metrics for patients than traditional methods.
On inpatient mortality, for example, the model scored 0.95 out of a possible 1.0, compared with 0.86 for traditional methods.
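A score out of a perfect 1.0 like this is typically an area under the ROC curve (AUROC): the probability that the model ranks a randomly chosen positive case above a randomly chosen negative one. A minimal pure-Python sketch, with invented labels and risk scores:

```python
def auroc(labels, scores):
    """Area under the ROC curve: the probability that a randomly chosen
    positive case scores higher than a randomly chosen negative one
    (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores: 1 = patient died in hospital, 0 = survived
labels = [1, 1, 0, 0, 0]
scores = [0.9, 0.6, 0.7, 0.3, 0.1]
print(auroc(labels, scores))  # 5/6: one mis-ranked pair out of six
```

On this scale, 0.5 is random guessing and 1.0 is perfect ranking, which is why the jump from 0.86 to 0.95 is a substantial improvement.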
In a blogpost, Google downplayed the idea that its AI would replace human clinicians’ role in diagnosing patients.
“We emphasize that the model is not diagnosing patients — it picks up signals about the patient, their treatments and notes written by their clinicians, so the model is more like a good listener than a master diagnostician,” the researchers note.
The site “Learn with Google AI” features a free course on machine learning that’s available to anyone.
Aside from ensuring access to machine learning expertise, the tech giant says there needs to be “fair and responsible” development of artificial intelligence to ensure societies can truly benefit from the technology.
AI bots and tools seem to permeate everything we do. From providing automated service to customers, answering simple queries, and detecting how customers are feeling, chatbots — sometimes with human intervention — can seem like they are taking over.
Now, a new AI-powered contact center solution aims to make complex customer service queries easier to manage.
Enterprise applications company IFS has released its customer engagement suite. Its three tools provide an omni-channel experience for customers who want to use self-service options online.
It uses AI technology to offer a speech-recognition, self-service front end that helps customers complete simple tasks, such as rescheduling service appointments or automatically checking appointment times.
Customers can find answers to questions using the website-hosted AI chatbot.
There are two deployment options: a visual overlay for IFS’s Field Service Management (FSM) software, or one for its Planning, Scheduling, and Optimizing (PSO) tool.
The aim is to reduce the burden on the contact center and improve customer engagement.
As human validation is critical, human agents using its desktop software can see all customer communications, including calls, emails, chat, messaging, or social media.
Agents get a complete view of the customer across all channels, which matters because many of us still prefer talking to a human when we have a tricky issue.
Paul White, director of IFS Customer Engagement, said: “Today’s end users expect instant gratification when it comes to customer service, and they want to be able to communicate in ways that are most comfortable and convenient for them, whether social media, email or messaging.”
Chatbots that provide a good ROI for business will pay for themselves in streamlined operation and data gathering.
Customers expect instant answers when they encounter customer service, so the more we interact with AI at every level, the more important it is for customer engagement tools to reflect this.
Moving from simple to complex problem solving, and lightening the load for the overstretched contact center agent, is the next step for AI. The company that can deliver this level of AI sophistication will win across the enterprise.
Microsoft yesterday declared the Windows 10 April 2018 Update fit for business and now running on 250 million PCs. But despite the rocky upgrade for some users, the Redmond company insists its AI made the record-fast rollout a responsible one.
Microsoft boasted that the April 2018 Update was the fastest version to reach 250 million devices since Microsoft launched Windows 10 in 2015, shifting to the Windows-as-a-service model.
That means the Windows 10 April 2018 Update is now running on about 36 percent of the nearly 700 million monthly active devices on some version of Windows 10.
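That 36 percent figure follows directly from the two reported numbers:

```python
devices_on_update = 250_000_000   # PCs on the April 2018 Update
monthly_active = 700_000_000      # "nearly 700 million" Windows 10 devices

share = devices_on_update / monthly_active
print(f"{share:.1%}")  # 35.7%, i.e. about 36 percent
```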
That’s significantly less than the 50 percent adoption reported last month by Windows-focused ad analytics firm AdDuplex; however, the firm was on the right track when it called it the “fastest spreading Windows 10 update by far”.
Microsoft says the 250 million milestone was reached in “less than half the time” it took the previous feature update, the Fall Creators Update, to hit that number.
While fast adoption is generally a positive signal, the reaction to AdDuplex’s finding was not. Influential Windows watcher Paul Thurrott said the numbers were “irresponsibly worse” than he expected, given the numerous glitches that users have encountered during or after updating. These have affected Chrome users, Avast antivirus users, machines with certain Intel SSDs, and a range of Dell’s Alienware devices.
But Microsoft insists its combination of telemetry data and artificial intelligence (AI) actually made this update both “safe and fast”, allowing it to identify problems early and block the update on hardware it would cause problems for. It did this for Alienware devices and for Intel SSDs that clashed with the update.
“Our AI approach has enabled us to quickly spot issues during deployment of a feature update, and as a result has also allowed us to go faster responsibly,” the company said in yesterday’s blog.
“When our AI model, feedback or telemetry data indicate that there may be an issue, we quickly adjust and prevent affected devices from being offered the update until we thoroughly investigate. Once issues are resolved we proceed again with confidence. This allows us to throttle the update rollout to customers without them needing to take any action.”
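The gating mechanism described in that quote can be sketched roughly as a set of block rules evaluated per device. The device attributes, rules, and affected-hardware list below are illustrative guesses, not Microsoft’s actual criteria:

```python
# Hypothetical sketch of a gated rollout: devices matching a known-issue
# rule are held back until the block is lifted; everyone else is offered
# the update.

BLOCK_RULES = [
    lambda d: d.get("oem") == "Alienware",                       # known OEM issue
    lambda d: d.get("ssd_model", "").startswith("Intel SSD 600p"),  # known SSD clash
]

def offer_update(device):
    """Return True if the device should currently be offered the update."""
    return not any(rule(device) for rule in BLOCK_RULES)

fleet = [
    {"id": 1, "oem": "Dell", "ssd_model": "Samsung 970"},
    {"id": 2, "oem": "Alienware", "ssd_model": "Samsung 970"},
    {"id": 3, "oem": "HP", "ssd_model": "Intel SSD 600p"},
]
eligible = [d["id"] for d in fleet if offer_update(d)]
print(eligible)  # [1]
```

Once an issue is investigated and fixed, the corresponding rule is removed and the held-back devices become eligible again, which is what lets the rollout resume “with confidence”.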
Microsoft also addressed issues that its AI and telemetry missed, such as the recent black screen and reboot chaos that Avast users experienced when moving to the April 2018 Update.
Microsoft claims it detected that issue “within 24 hours of it first appearing” and nipped it in the bud by blocking the Windows update to potentially affected devices.
“We immediately blocked all PCs that could be impacted by this issue from being updated, and communicated to customers within 24 hours, including an initial work around. In the next 24 hours, in cooperation with Avast, Microsoft identified an element of the Avast Behavior Shield that conflicted with the April 2018 Update. Avast immediately released a fix to prevent this issue from further occurring, enabling us to continue to safely roll out the April 2018 Update to those devices.”
Microsoft posted its answer on its community forum a day after ZDNet reported what was then a suspected but unconfirmed clash between Avast and the Windows 10 update. However, the computer repair company that first tied the issue to Avast discovered the problem on May 22, which generated plenty of discussion on Reddit in the three days before Microsoft and Avast fixed it.
Microsoft is stepping up its work to apply artificial intelligence (AI) technologies to the retail/point-of-sale space, according to a new report.
Reuters reported on June 14 that Microsoft has a group within its AI team that’s working to apply computer vision and some of its “intelligent edge” work to possibly compete with Amazon in the checkout-free retail space.
Amazon Go is a brick-and-mortar concept store that replaces cashiers and checkout lines with computer vision and artificial intelligence.
Microsoft has a number of Internet of Things (IoT) and AI services that could potentially be applied to the retail space. Microsoft has been increasing its focus both on IoT endpoints (sensors, embedded devices, cameras, etc.) and the cognitive services — such as Azure image-processing/vision, face recognition, speech and search — that can connect to these endpoints.
The company also has retail/point-of-sale solutions that it markets through its Dynamics 365 for Retail software and service offerings.
And at its Build 2018 conference earlier this year, Microsoft introduced a new package of sensors called “Project Kinect,” which will provide developers a way to embed a camera and related sensors into robots, drones and industrial equipment and automatically get hand tracking and high-fidelity spatial mapping. (The camera in Project Kinect is believed to be the same camera which will be in the next HoloLens.) The tag line for Project Kinect is “bringing AI to the edge.” The package of sensors will be available in 2019, like the next HoloLens.
Microsoft has worked with partners to target the retail industry, rather than competing with retailers head-to-head the way that Amazon is doing.
I’ve asked Microsoft for comment on the Reuters report. No word back so far.
Update: A Microsoft spokesperson said the company does not comment on rumors and speculation.
Automation technologies — from AI to robotic process automation (RPA), physical robotics, and more — are transforming business processes and operating models. But most companies don’t have the competencies to implement automation technologies successfully. And so we created RQ — the robotics quotient — to help digital and technology leaders make better investments in the prerequisites to success with automation, AI, and robotics.
RQ measures the ability of individuals and organizations to learn from, adapt to, collaborate with, trust, and generate business results from automated entities, including software like RPA, AI, physical robotics, and related systems. Across more than nine months of research, what Forrester learned from enterprise organizations is this: People, leaders, and organizations must all bring something to the table when preparing to deploy AI or automation. And the competencies of those people, leaders, and organizations must be refracted through the lens of trust, which varies by technology. We call this the PLOT framework.
The PLOT framework is key to self-assessing areas for improvement. In our self-scoring tool (an Excel spreadsheet embedded inside the report), clients can assess RQ across 39 different dimensions, scoring a current state and a desired (yet plausible within 6 to 12 months) state. Using this tool, you can identify the areas that need the most improvement and the organizational competencies you need to acquire in order to succeed with RPA, AI, DPA, physical robotics, and the like.
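The gap-scoring idea behind such a tool can be sketched in a few lines. The dimension names and scores here are hypothetical, and the real spreadsheet covers 39 dimensions:

```python
# Score each RQ dimension's current and desired state, then rank by gap
# to find where to invest first. All data below is invented.

assessments = {
    "employee technical skills": (2, 4),  # (current, desired in 6-12 months)
    "leadership adaptability":   (3, 4),
    "formal automation roles":   (1, 4),
    "trust in opaque systems":   (2, 3),
}

gaps = sorted(
    ((desired - current, dim) for dim, (current, desired) in assessments.items()),
    reverse=True,
)
for gap, dim in gaps:
    print(f"{dim}: gap {gap}")
```

Sorting by the current-to-desired gap surfaces the competencies furthest from where the organization wants to be, which is exactly the prioritization the self-assessment is meant to support.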
People require emotional, logical, and technical skills. Our people evaluation derives not only from interviews and data but from 30 years of research into emotional intelligence (EQ) as applied to human-machine interactions. People high in RQ possess the ability to engage in sophisticated information processing and task completion by understanding, adapting to, collaborating with, and exchanging data and insights with intelligent machines.
Leaders must balance vision with adaptability and trust. Not only must leaders cultivate the right skills and inclinations among their employees (people), they must change their leadership style to suit the era of AI and automation. For instance, willingness to change — while not acting whimsically or on an ad hoc basis — is crucial to automation technology success. Often, goals and measurements will shift midproject, and leaders must find an effective way to adapt.
Organizations need new roles, superior processes, and training. We find that informal, underfunded initiatives don’t work as well as formalized, clear changes to organizations. For example, there are new roles and skills that must be introduced into nearly every organization that deploys automation and AI — and these roles and skills are even common to the deployment of physical robotics.
Trust varies by technology. For all their many commonalities, disparate automation technologies present different challenges and success factors, most importantly around building trust. Depending on how transparent (or opaque) and how deterministic (or probabilistic) the software system is, humans will bring a different level of instinctive trust to the interaction.
Learn more about why RQ is the next major assessment for digital transformation by watching Forrester’s latest webinar [subscription required]. For additional information on RPA and how it can help companies achieve enterprise-wide digital transformation, tune in to Forrester’s recent podcast with vice president and principal analyst Craig Le Clair.
Artificial intelligence (AI) is an enabler of technology rather than a business in itself, according to Cisco CEO Chuck Robbins, who told ZDNet that AI sits behind much of the networking giant’s portfolio.
Speaking with ZDNet during Cisco Live 2018 in Orlando, Robbins pointed to Cisco’s Talos team discovering the VPNFilter attack last month.
“Across our portfolio … if you look at how we deal with security and the number of threats, there’s an element of AI and machine learning, and we wouldn’t be able to do what we’re doing [without it],” Robbins told ZDNet.
“When our threat researchers found the VPNFilter attack, there was a lot of information and a lot of compute power going into discovering these things on a global basis, so it’s important there, it’s important in intent-based networking, it’s important in our collaboration portfolio, it’s important everywhere.
“Some companies will decide that AI is a business for them. For us, it is an enabler of every piece of technology that we build.”
Robbins added that while many companies are mislabelling things as AI, Cisco is more “realistic” about its actual definition.
“We’re realists about what technology is really doing, and I personally think a lot of what is being called AI today is simply massive datasets with incredibly intelligent algorithms being processed very quickly,” he argued.
Calling AI and machine learning “pervasive” across the business, Cisco’s EVP of Networking and Security David Goeckeler also pointed to Talos’ discovery of VPNFilter, attributing this to AI.
“That’s a very good example where with our threat intel research, we actually then reach out to governments and we coordinate activities of how to protect people from cyber threats,” he told media at Cisco Live.
“So there’s an enormous amount [of AI] that goes on across the board on cybersecurity.”
Goeckeler boiled down Cisco’s approach to AI as being an “enormous dataset that we apply machine and human intelligence to find out where threat actors are in the world, and we push policy back into the infrastructure”.
Cisco is particularly focused on utilising AI and machine learning across its security portfolio, he said, and has been for “a very long time now”.
“We’re streaming real-time telemetry to a central point in the network, and then we’re applying intelligence, we have recommendation engines, when we see something we’ll recommend what the fix should be, so all of that is live across the networking portfolio today as well, and then you’ve got the intersection of these too, like … encrypted traffic analytics, so essentially taking networking data plus some security data and you’re mixing them together and you’re using inference to figure out what is malware,” Goeckeler explained.
“You can’t inspect it anymore because it’s encrypted, but if we collect enough data of the behaviour of traffic, we can infer with very, very high probability and very, very low false positives what is malware, so that’s a perfect example of where we’re using very, very advanced learning techniques kind of intersected with the datasets we have both on the network data and the security data.”
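The idea of inferring malware from encrypted-traffic behaviour can be illustrated with a toy heuristic over flow metadata. Everything here (the features, weights, and thresholds) is invented for illustration and bears no relation to Cisco’s actual models, which use far richer data and trained classifiers:

```python
# Illustrative only: score an encrypted flow from metadata (packet sizes
# and inter-arrival gaps) without ever decrypting payloads.

def flow_features(packet_sizes, gaps_ms):
    """Summarize a flow: average packet size, average gap between
    packets, and the fraction of small packets."""
    avg_size = sum(packet_sizes) / len(packet_sizes)
    avg_gap = sum(gaps_ms) / len(gaps_ms)
    small_ratio = sum(s < 100 for s in packet_sizes) / len(packet_sizes)
    return avg_size, avg_gap, small_ratio

def malware_score(packet_sizes, gaps_ms):
    """Hypothetical heuristic: beacon-like flows (small, slow, regular
    packets typical of command-and-control check-ins) score higher."""
    avg_size, avg_gap, small_ratio = flow_features(packet_sizes, gaps_ms)
    score = 0.0
    if small_ratio > 0.8:
        score += 0.5
    if avg_gap > 500:   # slow, periodic "heartbeat" traffic
        score += 0.3
    if avg_size < 120:
        score += 0.2
    return score

beacon = malware_score([64, 64, 70, 66], [1000, 1005, 998])   # C2-like
browsing = malware_score([1400, 900, 1200, 64], [5, 12, 8])   # web-like
print(beacon, browsing)
```

A production system would replace the hand-tuned thresholds with a model trained on labelled network and security data, which is the intersection Goeckeler describes.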
Speaking on Cisco’s $270 million acquisition of Accompany last month, and its decision to appoint CEO Amy Chang as head of Cisco’s collaboration business — including the combined Spark and WebEx offering — Robbins said AI was a big part of this.
“I think right now what Amy [Chang] brings is a real deep analytics, AI, software expertise, and I think if you look at where we’re going to — how we’re going to work that portfolio in the future — it really is going to be around bringing in intelligence to create more robust experiences for those that are using our platforms,” Robbins said.
Disclosure: Corinne Reichert travelled to Cisco Live in Orlando as a guest of Cisco.
Google outlined its artificial intelligence principles in a move to placate employees who were worried about their work and research winding up in U.S. weapons systems.
Guess what? It’s already too late. There’s no way that Google’s open source approach and its headline principle to not allow its AI into weapons is going to mesh. Chances are fairly good that the technology already open sourced is in some fledgling weapon system somewhere. After all, TensorFlow and a bunch of other neural network tools are pretty damn handy.
“Beyond our products, we’re using AI to help people tackle urgent problems. A pair of high school students are building AI-powered sensors to predict the risk of wildfires. Farmers are using it to monitor the health of their herds. Doctors are starting to use AI to help diagnose cancer and prevent blindness. These clear benefits are why Google invests heavily in AI research and development, and makes AI technologies widely available to others via our tools and open-source code.”
And that’s all true. It’s also true that any technology can be used for good and evil. And that’s the real pickle with Google’s AI approach: it sounds good in theory, but carrying it out is going to create a few issues.
What happens when an AI approach that’s good is open sourced and used for evil? And whose definition of evil is it anyway?
Google’s seven principles state that AI should: be socially beneficial; avoid creating or reinforcing unfair bias; be built and tested for safety; be accountable to people; incorporate privacy design principles; uphold high standards of scientific excellence; and be made available only for uses that accord with these principles.
That last item may be the trickiest. Google is supposed to gauge how likely it is that its technology can be adapted for harm. Google’s goal is worthwhile, but the bad guys can innovate well too.
Google concludes with how it won’t pursue AI that can cause overall harm, be used in weapons, be used to spy on people, or violate human rights.
Of course, Google won’t set out to do harm, but technologies are adapted for evil all the time. If Google wants to really keep a lid on AI for evil it may want to reconsider open source. Once the code is released publicly, Google can’t put the AI genie back into the bottle or force anyone to adhere to its principles.
Stitch Fix, a personal styling service powered by machine learning, delivered strong third quarter financial results on Thursday.
The San Francisco-based company reported net income of $9.5 million, or nine cents per share, on revenue of $316.7 million, an increase of 29 percent from a year ago.
Wall Street was looking for earnings of just three cents per share on revenue of $306.4 million. Stitch Fix shares were up as much as six percent in after-hours trading.
Elsewhere in the report, Stitch Fix said it had 2.7 million active clients, up 30 percent from the same time last year. The customer increase is a testament to the Stitch Fix business model and its reliance on data science and machine learning algorithms.
Through these technologies, Stitch Fix has created a data-driven customer feedback loop that’s helping the company close the gap between customer information and experience.
“We continue to balance growth and profitability, demonstrated by our ability to consistently deliver top-line growth of over 20 percent even as we invest in category expansions, technology talent, and marketing,” said Stitch Fix CEO Katrina Lake. “Our third quarter results demonstrate continued positive momentum for Stitch Fix and the power of our unique ability to deliver personalized service at scale.”
Weeks after facing both internal and external blowback for its contract selling AI technology to the Pentagon for drone video analysis, Google on Thursday published a set of principles that explicitly states it will not design or deploy AI for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”
Google committed to seven principles to guide its development of AI applications, and it laid out four specific areas for which it will not develop AI. In addition to weaponry, Google said it will not design or deploy AI for: technologies that cause or are likely to cause overall harm; technologies that gather or use information for surveillance violating internationally accepted norms; and technologies whose purpose contravenes widely accepted principles of international law and human rights.
While Google is rejecting the use of its AI for weapons, “we will continue our work with governments and the military in many other areas,” Google CEO Sundar Pichai wrote in a blog post. “These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.”
Google’s contract with the Defense Department came to light in March after Gizmodo published details about a pilot project shared on an internal mailing list. Thousands of Google employees petitioned against the contract and some quit in protest. Google then reportedly told its staff it would not bid to renew the contract, for the Pentagon’s Project Maven, after it expires in 2019.
In his blog post, Pichai said the seven principles laid out Thursday “are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.”
The seven principles state that AI should: be socially beneficial, avoid creating or reinforcing unfair bias, be built and tested for safety, be accountable to people, incorporate privacy design principles, uphold high standards of scientific excellence, and be made available for uses that accord with these principles.
While Google’s work with the Pentagon came under scrutiny, other major companies are also facing questions about the ethical principles guiding their AI development: Amazon, for instance, has been called out by the ACLU for providing facial recognition tools to law enforcement.