Category Archives: AI

AI and the Future of Work


Partner Content – WIRED Insider
By WIRED Brand Lab for Accenture

 

While no one knows what artificial intelligence’s effect on work will be, we can all agree on one thing: it’s disruptive. So far, many have cast that disruption in a negative light and projected a future in which robots take jobs from human workers.

That’s one way to look at it. Another is that automation may create more jobs than it displaces. By offering new tools for entrepreneurs, it may also create new lines of business that we can’t imagine now.

A recent study from Redwood Software and Sapio Research underscores the scale of that disruption: participants in the 2017 study said they believe that 60 percent of businesses can be automated in the next five years.

Gartner, on the other hand, predicts that by 2020 AI will produce more jobs than it displaces. Dennis Mortensen, CEO and founder of x.ai, maker of the AI-based virtual assistant Amy, agreed. “I look at our firm and two-thirds of the jobs here didn’t exist a few years ago,” said Mortensen.

In addition to creating new jobs, AI will also help people do their jobs better — a lot better. At the World Economic Forum in Davos, Paul Daugherty, Accenture’s Chief Technology and Innovation Officer, summed this idea up as, “Human plus machine equals superpowers.”

For many reasons, the optimistic view is likely the more realistic one. But AI’s ability to transform work is far from preordained. In 2018, workers are not being adequately prepared for their futures. The algorithms and data that underlie AI are also flawed and don’t reflect the diverse society they’re meant to serve.

How AI Could Grow Jobs: Inventing New Ones, Empowering Existing Ones

While AI will certainly displace some jobs, such displacement has occurred long before AI was on the scene. In the past century, we’ve seen the demise or diminishment of titles like travel agent, switchboard operator, milkman, elevator operator and bowling alley pinsetter. Meanwhile, new titles like app developer, social media director, and data scientist have emerged.

Daugherty and Jim Wilson, managing director of Information Technology and Business Research at Accenture Research, have co-authored a book titled Human+Machine: Reimagining Work in the Age of AI. In their view, future (and current) jobs include trainers and explainers. Trainers will teach AI systems how to perform and mimic human behaviors. Explainers will liaise between machines and human supervisors.

Trainers

Chatbots have recently emerged as a new communications conduit for brands and consumers. It’s no secret, though, that they have often been stiff and offered inappropriate responses. For instance, we might say “It’s raining again. Great,” and humans would recognize the sarcasm. A machine wouldn’t.

Understanding language is one component of perfecting chatbots. Another is empathy. A new wave of startups is injecting emotional intelligence into chatbot-based communication.

Eugenia Kuyda, cofounder of Replika, said empathetic chatbots like hers rely on human trainers. “In the future I think one of the most interesting areas of knowledge will be knowing human behavior and psychology,” she said. “You have to build chatbots in a way that makes people happy and want to achieve their goals. Without a certain amount of empathy, it’s not going to happen.”

In addition, companies like Facebook and Google use humans to moderate content. Facebook currently employs around 7,500 people for this purpose. Google parent company Alphabet also recently said it planned to have 10,000 people moderating YouTube content.

Explainers

Trainers bring a human element to AI systems, but “explainers” will bridge the gap between the new systems and their human managers.

C-suite executives, for instance, will be uneasy about basing decisions on “black box” algorithms. They will need explanations in plain English — delivered by a human — to ease their concerns.

Legislation is another impetus. The European Union’s General Data Protection Regulation, which goes into effect this year, includes the “right to explanation.” That means consumers can question and fight any decision that affects them and was made on a solely algorithmic basis.

Such explainers will perform “autopsies” when the machines make mistakes. They will also diagnose the errors and help take steps to avoid similar mistakes in the future.

Empowering Workers, Businesses and Industries

Rather than replacing workers, AI can be a tool to help employees work better. A call center employee, for instance, can get instant intelligence about what the caller needs and do their work faster and better. The same goes for businesses and industries. In life sciences, for example, Accenture is using deep learning and neural networks to help companies bring treatments to market faster.

In addition to helping existing businesses, AI can create new ones. Such new businesses include digital-based elder care, AI-based agriculture and AI-based monitoring of sales calls.

Finally, automation can be used to fill currently unfilled jobs. As Daugherty noted recently, there is a shortage of 150,000 truck drivers in the U.S. right now. “We need automation to improve the productivity of the drivers, the lifestyle of the drivers to attract more people to the industry,” he said.

Changes We Need To Make Today

It will likely take a decade or so until some AI technologies become the norm. While that provides plenty of lead time for the transition, few companies are taking action now to train their workers. Another little-noticed problem is that the AI systems themselves are being created with data and algorithms that don’t reflect America’s diverse society.

Regarding the former, Accenture research shows business leaders don’t think that their workers are ready for AI, yet only 3 percent of those leaders are reinvesting in training. At a Davos meeting held by Accenture, Fei-Fei Li, an associate professor at Stanford University and director of the school’s AI lab, suggested using AI to retrain workers. “I think there’s a really exciting possibility that machine learning itself would help us to learn in more effective ways and to re-skill workers in more effective ways,” she said. “And I personally would like to see more investment and thought going into that aspect.”

Another issue to address in 2018 is the lack of diversity among the companies creating AI. As Li noted, this lack of diversity “is a bias itself.” Recent research from MIT has underscored this point. MIT Media Lab researcher Joy Buolamwini said she found evidence that facial recognition systems recognize white faces better than black faces. In particular, the study found that if the photo was of a white man, the systems guessed correctly more than 99 percent of the time. But for black women, the error rate was between 20 percent and 34 percent. Such biases have implications for the use of facial recognition for law enforcement, advertising and hiring.

As such research illustrates, AI may present itself as an alien force of disruption, but it’s actually a human invention that reflects its creators’ flaws and humanity. “The effect of AI on jobs is totally, absolutely within our control,” Cathy Bessant, chief operations and technology officer at Bank of America, said in her Davos chat. “This isn’t what we let AI do to the workforce, it’s how we control its use to the good of the workforce.”

This story was produced by the WIRED Brand Lab for Accenture.

Cognigy Hires Automation and RPA industry veteran Dennis Walsh

March 18, 2019  3:36:08 PM – Press Release

 

March 18, 2019, Düsseldorf, Germany. Cognigy, the market leader in Conversational AI, today proudly announced a strategic addition to its senior management team. Dennis Walsh joins the founding management team of Cognigy as President of its US operations.


Walsh has deep experience delivering automation solutions to enterprise organizations. He brings 20 years of high-level performance in the Business Process Automation (BPA) and Robotic Process Automation (RPA) markets. Previously, Walsh was President of Redwood Software. There, he built the US and APJ operations and established a highly successful OEM and reseller relationship with industry giant SAP. Walsh’s experience also includes building sales, marketing, partner and delivery operations at Sitelite, a startup management services provider, as well as sales leadership roles at Tivoli Systems/IBM.

“I’m thrilled to be joining Cognigy at this stage of their growth. They clearly have a superior solution which delivers immediate value to large enterprises,” Walsh said. “Their focus on delivering an open-architected, all-encompassing enterprise solution, where today only departmental solutions exist, is very appealing to CEOs, CIOs and CMOs who wish to realize the potential of Conversational AI in their organizations.”

“We couldn’t be more pleased to have Dennis join our team,” said co-founders Philipp Heltewig and Sascha Poggemann. “Dennis brings invaluable experience in the BPA and RPA marketplaces along with exceptional management skill and experience, which are important for us at this stage of growth in our company.”

 

About Cognigy: 

Cognigy is a leader in the Conversational AI marketplace. The Cognigy solution delivers an enterprise conversational platform that enables organizations to build complex, integrated cognitive bots on a single platform. Their solution helps companies rein in “bot sprawl” and delivers the most advanced level of Natural Language Understanding and enterprise application integration in the industry. Leading companies in the USA and EMEA have standardized on the Cognigy platform to accelerate their adoption of Conversational AI.

 

Want to learn more about conversational AI and Cognigy? Schedule a demo today or send us your questions…

Contact
Martina Yazgan

COGNIGY GmbH
Speditionstr. 1
40221 Düsseldorf

 

How we’ll invent the future, by Bill Gates – 10 Breakthrough Technologies 2019 (Part 2, The List)

By Bill Gates for MIT Technology Review – February 27, 2019

In addition to his introductory essay, read Bill Gates’ conversation with editor in chief Gideon Lichfield. Below are his picks for the 10 Breakthrough Technologies:

Robot dexterity

  • Why it matters If robots could learn to deal with the messiness of the real world, they could do many more tasks.
  • Key Players OpenAI
    Carnegie Mellon University
    University of Michigan
    UC Berkeley
  • Availability 3-5 years

Robots are teaching themselves to handle the physical world.

For all the talk about machines taking jobs, industrial robots are still clumsy and inflexible. A robot can repeatedly pick up a component on an assembly line with amazing precision and without ever getting bored—but move the object half an inch, or replace it with something slightly different, and the machine will fumble ineptly or paw at thin air.

But while a robot can’t yet be programmed to figure out how to grasp any object just by looking at it, as people do, it can now learn to manipulate the object on its own through virtual trial and error.

One such project is Dactyl, a robot that taught itself to flip a toy building block in its fingers. Dactyl, which comes from the San Francisco nonprofit OpenAI, consists of an off-the-shelf robot hand surrounded by an array of lights and cameras. Using what’s known as reinforcement learning, neural-network software learns how to grasp and turn the block within a simulated environment before the hand tries it out for real. The software experiments, randomly at first, strengthening connections within the network over time as it gets closer to its goal.

It usually isn’t possible to transfer that type of virtual practice to the real world, because things like friction or the varied properties of different materials are so difficult to simulate. The OpenAI team got around this by adding randomness to the virtual training, giving the robot a proxy for the messiness of reality.
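
To make the idea concrete, here is a toy sketch of domain randomization in Python. The environment, reward and update rule are invented for illustration; this is a schematic stand-in for reinforcement learning, not OpenAI's actual Dactyl code.

    import random

    class ToyGraspEnv:
        """Toy stand-in for a physics simulator: the 'right' grip force
        equals a hidden friction value that is re-randomized each episode."""
        def __init__(self):
            self.friction = 1.0

        def randomize(self):
            # Domain randomization: draw new "physics" every episode so the
            # policy cannot overfit to one idealized simulation.
            self.friction = random.uniform(0.5, 1.5)

        def reward(self, grip_force):
            # Best reward when grip force matches this episode's friction.
            return -abs(grip_force - self.friction)

    def train(episodes=5000, step=0.05):
        env = ToyGraspEnv()
        policy = 0.0  # a single learnable parameter
        for _ in range(episodes):
            env.randomize()
            candidate = policy + random.uniform(-step, step)
            # Keep the perturbation if it scores better this episode -- a
            # crude stand-in for a real reinforcement-learning update.
            if env.reward(candidate) > env.reward(policy):
                policy = candidate
        return policy

    print(train())  # drifts toward ~1.0, the middle of the randomized range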


We’ll need further breakthroughs for robots to master the advanced dexterity needed in a real warehouse or factory. But if researchers can reliably employ this kind of learning, robots might eventually assemble our gadgets, load our dishwashers, and even help Grandma out of bed. —Will Knight

New-wave nuclear power


Advanced fusion and fission reactors are edging closer to reality.

New nuclear designs that have gained momentum in the past year are promising to make this power source safer and cheaper. Among them are generation IV fission reactors, an evolution of traditional designs; small modular reactors; and fusion reactors, a technology that has seemed eternally just out of reach. Developers of generation IV fission designs, such as Canada’s Terrestrial Energy and Washington-based TerraPower, have entered into R&D partnerships with utilities, aiming for grid supply (somewhat optimistically, maybe) by the 2020s.

Small modular reactors typically produce in the tens of megawatts of power (for comparison, a traditional nuclear reactor produces around 1,000 MW). Companies like Oregon’s NuScale say the miniaturized reactors can save money and reduce environmental and financial risks.


There has even been progress on fusion. Though no one expects delivery before 2030, companies like General Fusion and Commonwealth Fusion Systems, an MIT spinout, are making some headway. Many consider fusion a pipe dream, but because the reactors can’t melt down and don’t create long-lived, high-level waste, it should face much less public resistance than conventional nuclear. (Bill Gates is an investor in TerraPower and Commonwealth Fusion Systems.) —Leigh Phillips

Predicting preemies

  • Why it matters 15 million babies are born prematurely every year; it’s the leading cause of death for children under age five
  • Key player Akna Dx
  • Availability A test could be offered in doctors’ offices within five years

A simple blood test can predict if a pregnant woman is at risk of giving birth prematurely.

Our genetic material lives mostly inside our cells. But small amounts of “cell-free” DNA and RNA also float in our blood, often released by dying cells. In pregnant women, that cell-free material is an alphabet soup of nucleic acids from the fetus, the placenta, and the mother.

Stephen Quake, a bioengineer at Stanford, has found a way to use that to tackle one of medicine’s most intractable problems: the roughly one in 10 babies born prematurely.

Free-floating DNA and RNA can yield information that previously required invasive ways of grabbing cells, such as taking a biopsy of a tumor or puncturing a pregnant woman’s belly to perform an amniocentesis. What’s changed is that it’s now easier to detect and sequence the small amounts of cell-free genetic material in the blood. In the last few years researchers have begun developing blood tests for cancer (by spotting the telltale DNA from tumor cells) and for prenatal screening of conditions like Down syndrome.

The tests for these conditions rely on looking for genetic mutations in the DNA. RNA, on the other hand, is the molecule that regulates gene expression—how much of a protein is produced from a gene. By sequencing the free-floating RNA in the mother’s blood, Quake can spot fluctuations in the expression of seven genes that he singles out as associated with preterm birth. That lets him identify women likely to deliver too early. Once alerted, doctors can take measures to stave off an early birth and give the child a better chance of survival.
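
The statistical core of such a test can be pictured with a toy example: fit a classifier on the expression levels of a handful of genes and score new samples. The data, gene effects and model below are invented for illustration and are not Quake's actual method.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Invented stand-in data: expression levels of 7 genes for 200 pregnancies.
    X = rng.normal(size=(200, 7))
    y = rng.integers(0, 2, size=200)   # 1 = delivered preterm
    X[y == 1, :3] += 1.0               # pretend 3 genes run high in preterm cases

    model = LogisticRegression().fit(X, y)

    # A new blood draw: 7 cell-free RNA expression measurements.
    sample = rng.normal(size=(1, 7))
    print("estimated preterm risk:", model.predict_proba(sample)[0, 1])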


The technology behind the blood test, Quake says, is quick, easy, and less than $10 a measurement. He and his collaborators have launched a startup, Akna Dx, to commercialize it. —Bonnie Rochman

Gut probe in a pill

  • Why it matters The device makes it easier to screen for and study gut diseases, including one that keeps millions of children in poor countries from growing properly
  • Key player Massachusetts General Hospital
  • Availability Now used in adults; testing in infants begins in 2019

A small, swallowable device captures detailed images of the gut without anesthesia, even in infants and children.

Environmental enteric dysfunction (EED) may be one of the costliest diseases you’ve never heard of. Marked by inflamed intestines that are leaky and absorb nutrients poorly, it’s widespread in poor countries and is one reason why many people there are malnourished, have developmental delays, and never reach a normal height. No one knows exactly what causes EED and how it could be prevented or treated.

Practical screening to detect it would help medical workers know when to intervene and how. Therapies are already available for infants, but diagnosing and studying illnesses in the guts of such young children often requires anesthetizing them and inserting a tube called an endoscope down the throat. It’s expensive, uncomfortable, and not practical in areas of the world where EED is prevalent.

So Guillermo Tearney, a pathologist and engineer at Massachusetts General Hospital (MGH) in Boston, is developing small devices that can be used to inspect the gut for signs of EED and even obtain tissue biopsies. Unlike endoscopes, they are simple to use at a primary care visit.

Tearney’s swallowable capsules contain miniature microscopes. They’re attached to a flexible string-like tether that provides power and light while sending images to a briefcase-like console with a monitor. This lets the health-care worker pause the capsule at points of interest and pull it out when finished, allowing it to be sterilized and reused. (Though it sounds gag-inducing, Tearney’s team has developed a technique that they say doesn’t cause discomfort.) It can also carry technologies that image the entire surface of the digestive tract at the resolution of a single cell or capture three-dimensional cross sections a couple of millimeters deep.

The technology has several applications; at MGH it’s being used to screen for Barrett’s esophagus, a precursor of esophageal cancer. For EED, Tearney’s team has developed an even smaller version for use in infants who can’t swallow a pill. It’s been tested on adolescents in Pakistan, where EED is prevalent, and infant testing is planned for 2019.


The little probe will help researchers answer questions about EED’s development—such as which cells it affects and whether bacteria are involved—and evaluate interventions and potential treatments. —Courtney Humphries

Custom cancer vaccines

  • Why it matters Conventional chemotherapies take a heavy toll on healthy cells and aren’t always effective against tumors
  • Key players BioNTech
    Genentech
  • Availability In human testing

The treatment incites the body’s natural defenses to destroy only cancer cells by identifying mutations unique to each tumor.

Scientists are on the cusp of commercializing the first personalized cancer vaccine. If it works as hoped, the vaccine, which triggers a person’s immune system to identify a tumor by its unique mutations, could effectively shut down many types of cancers.

By using the body’s natural defenses to selectively destroy only tumor cells, the vaccine, unlike conventional chemotherapies, limits damage to healthy cells. The attacking immune cells could also be vigilant in spotting any stray cancer cells after the initial treatment.

The possibility of such vaccines began to take shape in 2008, five years after the Human Genome Project was completed, when geneticists published the first sequence of a cancerous tumor cell.

Soon after, investigators began to compare the DNA of tumor cells with that of healthy cells—and other tumor cells. These studies confirmed that all cancer cells contain hundreds if not thousands of specific mutations, most of which are unique to each tumor.
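
The comparison step can be pictured with a deliberately simplified sketch: line up the two DNA sequences and report the positions where they differ. Real variant-calling pipelines work on billions of aligned sequencing reads and are far more involved; this shows only the core idea, on invented sequences.

    def somatic_mutations(normal_dna, tumor_dna):
        """Report (position, normal base, tumor base) wherever the sequences differ."""
        return [(i, n, t)
                for i, (n, t) in enumerate(zip(normal_dna, tumor_dna))
                if n != t]

    # Invented 10-base sequences for illustration.
    print(somatic_mutations("ACGTACGTAC", "ACGAACGTTC"))
    # -> [(3, 'T', 'A'), (8, 'A', 'T')]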

A few years later, a German startup called BioNTech provided compelling evidence that a vaccine containing copies of these mutations could catalyze the body’s immune system to produce T cells primed to seek out, attack, and destroy all cancer cells harboring them.

In December 2017, BioNTech began a large test of the vaccine in cancer patients, in collaboration with the biotech giant Genentech. The ongoing trial is targeting at least 10 solid cancers and aims to enroll upwards of 560 patients at sites around the globe.


The two companies are designing new manufacturing techniques to produce thousands of personally customized vaccines cheaply and quickly. That will be tricky because creating the vaccine involves performing a biopsy on the patient’s tumor, sequencing and analyzing its DNA, and rushing that information to the production site. Once produced, the vaccine needs to be promptly delivered to the hospital; delays could be deadly. —Adam Piore

The cow-free burger

  • Why it matters Livestock production causes catastrophic deforestation, water pollution, and greenhouse-gas emissions
  • Key players Beyond Meat
    Impossible Foods
  • Availability Plant-based now; lab-grown around 2020

Both lab-grown and plant-based alternatives approximate the taste and nutritional value of real meat without the environmental devastation.

The UN expects the world to have 9.8 billion people by 2050. And those people are getting richer. Neither trend bodes well for climate change—especially because as people escape poverty, they tend to eat more meat.

By that date, according to the predictions, humans will consume 70% more meat than they did in 2005. And it turns out that raising animals for human consumption is among the worst things we do to the environment.

Depending on the animal, producing a pound of meat protein with Western industrialized methods requires 4 to 25 times more water, 6 to 17 times more land, and 6 to 20 times more fossil fuels than producing a pound of plant protein.

The problem is that people aren’t likely to stop eating meat anytime soon. Which means lab-grown and plant-based alternatives might be the best way to limit the destruction.

Making lab-grown meat involves extracting muscle tissue from animals and growing it in bioreactors. The end product looks much like what you’d get from an animal, although researchers are still working on the taste. Researchers at Maastricht University in the Netherlands, who are working to produce lab-grown meat at scale, believe they’ll have a lab-grown burger available by next year. One drawback of lab-grown meat is that the environmental benefits are still sketchy at best—a recent World Economic Forum report says the emissions from lab-grown meat would be only around 7% less than emissions from beef production.


The better environmental case can be made for plant-based meats from companies like Beyond Meat and Impossible Foods (Bill Gates is an investor in both companies), which use pea proteins, soy, wheat, potatoes, and plant oils to mimic the texture and taste of animal meat.

Beyond Meat has a new 26,000-square-foot (2,400-square-meter) plant in California and has already sold upwards of 25 million burgers from 30,000 stores and restaurants. According to an analysis by the Center for Sustainable Systems at the University of Michigan, a Beyond Meat patty would probably generate 90% less in greenhouse-gas emissions than a conventional burger made from a cow. —Markkus Rovito

Carbon dioxide catcher

  • Why it matters Removing CO2 from the atmosphere might be one of the last viable ways to stop catastrophic climate change
  • Key players Carbon Engineering
    Climeworks
    Global Thermostat
  • Availability 5-10 years

Practical and affordable ways to capture carbon dioxide from the air can soak up excess greenhouse-gas emissions.

Even if we slow carbon dioxide emissions, the warming effect of the greenhouse gas can persist for thousands of years. To prevent a dangerous rise in temperatures, the UN’s climate panel now concludes, the world will need to remove as much as 1 trillion tons of carbon dioxide from the atmosphere this century.

In a surprise finding last summer, Harvard climate scientist David Keith calculated that machines could, in theory, pull this off for less than $100 a ton, through an approach known as direct air capture. That’s an order of magnitude cheaper than earlier estimates that led many scientists to dismiss the technology as far too expensive—though it will still take years for costs to fall to anywhere near that level.
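
A quick back-of-the-envelope figure shows why that price matters: at $100 a ton, removing the trillion tons the UN panel cites would still cost on the order of $100 trillion over the century, so every further drop in the cost per ton counts enormously.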

But once you capture the carbon, you still need to figure out what to do with it.

Carbon Engineering, the Canadian startup Keith cofounded in 2009, plans to expand its pilot plant to ramp up production of its synthetic fuels, using the captured carbon dioxide as a key ingredient. (Bill Gates is an investor in Carbon Engineering.)

Zurich-based Climeworks’s direct air capture plant in Italy will produce methane from captured carbon dioxide and hydrogen, while a second plant in Switzerland will sell carbon dioxide to the soft-drinks industry. So will Global Thermostat of New York, which finished constructing its first commercial plant in Alabama last year.


Still, if it’s used in synthetic fuels or sodas, the carbon dioxide will mostly end up back in the atmosphere. The ultimate goal is to lock greenhouse gases away forever. Some could be nested within products like carbon fiber, polymers, or concrete, but far more will simply need to be buried underground, a costly job that no business model seems likely to support.

In fact, pulling CO2 out of the air is, from an engineering perspective, one of the most difficult and expensive ways of dealing with climate change. But given how slowly we’re reducing emissions, there are no good options left. —James Temple

An ECG on your wrist


Regulatory approval and technological advances are making it easier for people to continuously monitor their hearts with wearable devices.

Fitness trackers aren’t serious medical devices. An intense workout or loose band can mess with the sensors that read your pulse. But an electrocardiogram—the kind doctors use to diagnose abnormalities before they cause a stroke or heart attack—requires a visit to a clinic, and people often fail to take the test in time.

ECG-enabled smart watches, made possible by new regulations and innovations in hardware and software, offer the convenience of a wearable device with something closer to the precision of a medical one.

An Apple Watch–compatible band from Silicon Valley startup AliveCor that can detect atrial fibrillation, a frequent cause of blood clots and stroke, received clearance from the FDA in 2017. Last year, Apple released its own FDA-cleared ECG feature, embedded in the watch itself.
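
Wearable algorithms typically flag atrial fibrillation through the irregularity of the intervals between heartbeats. The toy sketch below, with synthetic beat times and an arbitrary cutoff, shows that general idea; it is not AliveCor's or Apple's actual method.

    import numpy as np

    def rr_irregularity(r_peak_times):
        """Coefficient of variation of R-R intervals; atrial fibrillation
        produces an 'irregularly irregular' rhythm, i.e. high variability."""
        rr = np.diff(r_peak_times)   # seconds between successive beats
        return rr.std() / rr.mean()

    # Synthetic beat times: a steady 60 bpm rhythm vs. an erratic one.
    steady = np.cumsum(np.full(60, 1.0))
    erratic = np.cumsum(np.random.default_rng(1).uniform(0.4, 1.6, size=60))

    THRESHOLD = 0.10  # arbitrary illustrative cutoff, not a clinical value
    for name, beats in (("steady", steady), ("erratic", erratic)):
        cv = rr_irregularity(beats)
        label = "possible AFib" if cv > THRESHOLD else "regular"
        print(f"{name}: CV={cv:.2f} -> {label}")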


The health-device company Withings also announced plans for an ECG-equipped watch shortly after.

Current wearables still employ only a single sensor, whereas a real ECG has 12. And no wearable can yet detect a heart attack as it’s happening.

But this might change soon. Last fall, AliveCor presented preliminary results to the American Heart Association on an app and two-sensor system that can detect a certain type of heart attack. —Karen Hao

Sanitation without sewers

  • Why it matters 2.3 billion people lack safe sanitation, and many die as a result
  • Key players Duke University
    University of South Florida
    Biomass Controls
    California Institute of Technology
  • Availability 1-2 years

Energy-efficient toilets can operate without a sewer system and treat waste on the spot.

About 2.3 billion people don’t have good sanitation. The lack of proper toilets encourages people to dump fecal matter into nearby ponds and streams, spreading bacteria, viruses, and parasites that can cause diarrhea and cholera. Diarrhea causes one in nine child deaths worldwide.

Now researchers are working to build a new kind of toilet that’s cheap enough for the developing world and can not only dispose of waste but treat it as well.

In 2011 Bill Gates created what was essentially the X Prize in this area—the Reinvent the Toilet Challenge. Since the contest’s launch, several teams have put prototypes in the field. All process the waste locally, so there’s no need for large amounts of water to carry it to a distant treatment plant.

Most of the prototypes are self-contained and don’t need sewers, but they look like traditional toilets housed in small buildings or storage containers. The NEWgenerator toilet, designed at the University of South Florida, filters out pollutants with an anaerobic membrane, which has pores smaller than bacteria and viruses. Another project, from Connecticut-based Biomass Controls, is a refinery the size of a shipping container; it heats the waste to produce a carbon-rich material that can, among other things, fertilize soil.


One drawback is that the toilets don’t work at every scale. The Biomass Controls product, for example, is designed primarily for tens of thousands of users per day, which makes it less well suited for smaller villages. Another system, developed at Duke University, is meant to be used only by a few nearby homes.

So the challenge now is to make these toilets cheaper and more adaptable to communities of different sizes. “It’s great to build one or two units,” says Daniel Yeh, an associate professor at the University of South Florida, who led the NEWgenerator team. “But to really have the technology impact the world, the only way to do that is mass-produce the units.” —Erin Winick

Smooth-talking AI assistants

  • Why it matters AI assistants can now perform conversation-based tasks like booking a restaurant reservation or coordinating a package drop-off rather than just obey simple commands
  • Key players Google
    Alibaba
    Amazon
  • Availability 1-2 years

New techniques that capture semantic relationships between words are making machines better at understanding natural language.

We’re used to AI assistants—Alexa playing music in the living room, Siri setting alarms on your phone—but they haven’t really lived up to their alleged smarts. They were supposed to have simplified our lives, but they’ve barely made a dent. They recognize only a narrow range of directives and are easily tripped up by deviations.

But some recent advances are about to expand your digital assistant’s repertoire. In June 2018, researchers at OpenAI developed a technique that trains an AI on unlabeled text to avoid the expense and time of categorizing and tagging all the data manually. A few months later, a team at Google unveiled a system called BERT that learned how to predict missing words by studying millions of sentences. In a multiple-choice test, it did as well as humans at filling in gaps.
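
To get a feel for the missing-word prediction BERT learned, here is a minimal sketch using the open-source Hugging Face transformers library; the library choice and the prompt are this write-up's assumptions, not something the article specifies.

    from transformers import pipeline

    # Download a pretrained BERT and ask it to fill in a masked word.
    unmasker = pipeline("fill-mask", model="bert-base-uncased")

    for guess in unmasker("The assistant booked a [MASK] for two at 7 pm."):
        print(f"{guess['token_str']:>12}  score={guess['score']:.3f}")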

These improvements, coupled with better speech synthesis, are letting us move from giving AI assistants simple commands to having conversations with them. They’ll be able to deal with daily minutiae like taking meeting notes, finding information, or shopping online.

Some are already here. Google Duplex, the eerily human-like upgrade of Google Assistant, can pick up your calls to screen for spammers and telemarketers. It can also make calls for you to schedule restaurant reservations or salon appointments.

In China, consumers are getting used to Alibaba’s AliMe, which coordinates package deliveries over the phone and haggles about the price of goods over chat.


But while AI programs have gotten better at figuring out what you want, they still can’t understand a sentence. Lines are scripted or generated statistically, reflecting how hard it is to imbue machines with true language understanding. Once we cross that hurdle, we’ll see yet another evolution, perhaps from logistics coordinator to babysitter, teacher—or even friend? —Karen Hao

Bill Gates: How we’ll invent the future (Part 1, Introductory essay)

The thinking behind this year’s list of 10 Breakthrough Technologies began with the plow.

 

By Bill Gates for MIT Technology Review – February 27, 2019

 

I was honored when MIT Technology Review invited me to be the first guest curator of its 10 Breakthrough Technologies. Narrowing down the list was difficult. I wanted to choose things that not only will create headlines in 2019 but also capture this moment in technological history—which got me thinking about how innovation has evolved over time.

My mind went to—of all things—the plow. Plows are an excellent embodiment of the history of innovation. Humans have been using them since 4000 BCE, when Mesopotamian farmers aerated soil with sharpened sticks. We’ve been slowly tinkering with and improving them ever since, and today’s plows are technological marvels.

But what exactly is the purpose of a plow? It’s a tool that creates more: more seeds planted, more crops harvested, more food to go around. In places where nutrition is hard to come by, it’s no exaggeration to say that a plow gives people more years of life. The plow—like many technologies, both ancient and modern—is about creating more of something and doing it more efficiently, so that more people can benefit.

Contrast that with lab-grown meat, one of the innovations I picked for this year’s 10 Breakthrough Technologies list. Growing animal protein in a lab isn’t about feeding more people. There’s enough livestock to feed the world already, even as demand for meat goes up. Next-generation protein isn’t about creating more—it’s about making meat better. It lets us provide for a growing and wealthier world without contributing to deforestation or emitting methane. It also allows us to enjoy hamburgers without killing any animals.

Put another way, the plow improves our quantity of life, and lab-grown meat improves our quality of life. For most of human history, we’ve put most of our innovative capacity into the former. And our efforts have paid off: worldwide life expectancy rose from 34 years in 1913 to 60 in 1973 and has reached 71 today.

Because we’re living longer, our focus is starting to shift toward well-being. This transformation is happening slowly. If you divide scientific breakthroughs into these two categories—things that improve quantity of life and things that improve quality of life—the 2009 list looks not so different from this year’s. Like most forms of progress, the change is so gradual that it’s hard to perceive. It’s a matter of decades, not years—and I believe we’re only at the midpoint of the transition.

To be clear, I don’t think humanity will stop trying to extend life spans anytime soon. We’re still far from a world where everyone everywhere lives to old age in perfect health, and it’s going to take a lot of innovation to get us there. Plus, “quantity of life” and “quality of life” are not mutually exclusive. A malaria vaccine would both save lives and make life better for children who might otherwise have been left with developmental delays from the disease.

We’ve reached a point where we’re tackling both ideas at once, and that’s what makes this moment in history so interesting. If I had to predict what this list will look like a few years from now, I’d bet technologies that alleviate chronic disease will be a big theme. This won’t just include new drugs (although I would love to see new treatments for diseases like Alzheimer’s on the list). The innovations might look like a mechanical glove that helps a person with arthritis maintain flexibility, or an app that connects people experiencing major depressive episodes with the help they need.

If we could look even further out—let’s say the list 20 years from now—I would hope to see technologies that center almost entirely on well-being. I think the brilliant minds of the future will focus on more metaphysical questions: How do we make people happier? How do we create meaningful connections? How do we help everyone live a fulfilling life?

I would love to see these questions shape the 2039 list, because it would mean that we’ve successfully fought back disease (and dealt with climate change). I can’t imagine a greater sign of progress than that. For now, though, the innovations driving change are a mix of things that extend life and things that make it better. My picks reflect both. Each one gives me a different reason to be optimistic for the future, and I hope they inspire you, too.

My selections include amazing new tools that will one day save lives, from simple blood tests that predict premature birth to toilets that destroy deadly pathogens. I’m equally excited by how other technologies on the list will improve our lives. Wearable health monitors like the wrist-based ECG will warn heart patients of impending problems, while others let diabetics not only track glucose levels but manage their disease. Advanced nuclear reactors could provide carbon-free, safe, secure energy to the world.

One of my choices even offers us a peek at a future where society’s primary goal is personal fulfillment. Among many other applications, AI-driven personal agents might one day make your e-mail in-box more manageable—something that sounds trivial until you consider what possibilities open up when you have more free time.

The 30 minutes you used to spend reading e-mail could be spent doing other things. I know some people would use that time to get more work done—but I hope most would use it for pursuits like connecting with a friend over coffee, helping your child with homework, or even volunteering in your community.

That, I think, is a future worth working toward.

 

Behold the IoT Invasion: Eight Reasons to Plug In (Slideshow)


John McDonald, CEO, ClearObject | Mar 12, 2019 for IndustryWeek

An IoT integrator shares the big trends to capitalize on in the next few years

 

Consumer spending on digital products and services is predicted to double by 2021, and the Internet of Things (IoT) space grew just as fast in 2018. Every industry is looking for new, advanced ways to meet production and consumer demands in a world of instant gratification. As an IoT systems integrator, we see the following trends continuing at the forefront in 2019 and beyond.

IoT and data are critical to today’s operations in any industry. It’s no longer feasible to ignore the gains in efficiency, productivity and customer satisfaction that come from advances in IoT and data. Every industry must adopt new and inventive methods like IoT and machine learning to analyze transactions and data in any form, whether it’s a car that can detect driver fatigue, preventive maintenance sensors, or nanotechnology that monitors food sources.

The accompanying slideshow covers eight areas that should see serious growth in the next few years.


John McDonald is the CEO of Fishers-based ClearObject and chair of the Indiana Technology and Innovation Policy Committee.

The Future of the Network: Get More or Get Smart

When planning for the future of the network, we can do what we have always done or we can “Get Smart.”

 


By Craig Mathias, Principal, Farpoint Group | Oct 29, 2018 for ITPro Today

 

I recently did a presentation for an FCC advisory committee that’s looking into how the increasing volume of computing at the edge of the Internet is driving demand for network bandwidth. I opened my talk with a chart from the Cisco Visual Networking Index, an online document that forecasts network bandwidth demands over the next few years. So, let me cut to the chase: Cisco sees aggregate annual global bandwidth demand on the order of 292 exabytes in 2019. That’s 292 times the IP traffic volume—combined fixed and still rapidly growing mobile—of 2000, with demand still growing and mostly driven by streaming video. The immediate conclusion is that we need to get started on adding bulk to the Internet—and our own organizational networks—to handle this load.

As it turns out, there are two key schools of thought on how to approach the network-capacity challenge. I call them Get More and Get Smart. Let’s look at each.

Get More

Get More is the obvious direction for dealing with the growing capacity challenge, and it’s really what we’ve been doing with networks all along to enhance capacity—more of the same, but faster, better and cheaper.

Get More has historically been a very reliable strategy, based on the benefits that accrue from improvements in basic technologies (primarily chips and protocols) that regularly and reliably appear at lower prices or at least with constantly improving price/performance ratios. This is the faster/better/cheaper noted above.

All we need to do, then, is simply add more of the components we already know, love and understand—like Wi-Fi access points, Ethernet switches and WAN capacity—as required, either to address growing demand or to take advantage of those newer technologies or, really, both. This path, then, really is easy: Buy what you need; add more as you need more; realize better value without much (if any) effort beyond writing a check and installing the gear (including new software, like management and analytics); and overprovision, as we always must, to assure the headroom required for day-to-day growth and time-bounded traffic, as well as end-user productivity. Indeed, what could be easier?

Get Smart

To be fair, the alternate strategy, Get Smart, really isn’t easier today. However, it might be much easier, and cheaper, over the long run, as the technologies involved mature. Get Smart is based on taking advantage of new technologies that present themselves in the form of paradigm shifts—getting the same job (networking) done, but in new and more productive ways.

Here are the leading Get Smart directions today:

  • SDN, SD-WAN and SD-LAN: Software-defined networking enables networks to adapt intelligently to changes in traffic patterns, security challenges and overall growth. Think “softer networks”—both wireless and wired—coupled with improved management.
  • NFV: Network Functions Virtualization moves many networking functions into high-performance but otherwise traditional computers, substituting the flexibility of software for the specialized hardware that, again, needs to be replaced via upgrades from time to time. NFV is analogous to that more familiar form of virtualization, virtual machines, that makes better use of computer power that might otherwise go to waste. You’ll frequently see SDN and NFV mentioned—and, increasingly, implemented—together, as software is at the core of each.
  • Extreme Virtualization: Indeed, the only real hardware required in most networks in the future will be Wi-Fi access points, Ethernet switches to interconnect and power those APs and what few wired elements remain, and a WAN interface device (which will almost certainly be implemented using SDN and NFV) that is analogous to today’s router but much more configurable and flexible. Everything else—most notably, management, analytics and other operational support, but also traffic management, controllers and even security—is virtualized, along with computing and storage, into the cloud.
  • Desktop Virtualization: We will likely also move much end user processing and data to the server side of the link, and again into the cloud, and thus minimize the amount of traffic we’ll need to move in the first place. Lightweight protocols implementing the remoting of screen and other user I/O, like RDP and VDI, are much more efficient, in most cases, than simply implementing client/server in the cloud. Some processing will of course be done on mobile devices, but the essentially shared and collaborative nature of today’s IT solutions minimizes the amount of computing power really required in handsets, tablets and notebooks–many of which will be thin clients, like Chromebooks.
  • AI and ML: Artificial intelligence and machine learning are going to yield far-reaching benefits across all of IT and applications in general, but in networks we’ll see much more powerful and proactive analytics engaged via a feedback link between multi-tenant cloud-based analytics and management consoles. All of this will enable most problems to be resolved automatically, even before operations pros are aware of them. Network operations will center on policy specification, rather than the low-level tweaking of router settings via a CLI. (A toy sketch of this kind of analytics-driven detection follows this list.)
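
As a toy illustration of that last point, the sketch below flags anomalous link utilization with an off-the-shelf outlier detector; the telemetry and thresholds are invented, and a production system would be far richer.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Invented telemetry: per-minute link utilization (%) with injected spikes.
    normal = rng.normal(40, 5, size=1000)
    spikes = rng.normal(95, 2, size=10)
    utilization = np.concatenate([normal, spikes]).reshape(-1, 1)

    detector = IsolationForest(contamination=0.01, random_state=0).fit(utilization)
    flags = detector.predict(utilization)  # -1 marks an anomalous sample

    print(f"flagged {np.sum(flags == -1)} of {len(utilization)} samples")
    # A real deployment would feed such flags back to the management console
    # and remediate automatically (reroute, rate-limit) per operator policy.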

So, how can IT management decide which of these two strategies—Get More and Get Smart—will be the best alternative in their own individual cases? Begin with the information central to operations, and how (and thus where) this data is most efficiently and productively stored and processed. Then think about how the creation, distribution and management of this information will evolve over time and how IT can best carry out this mission.

It’s also important to conduct a financial analysis of the two options.

Ideally, Get Smart will improve the productivity of network operations staffs, whose associated costs are a huge chunk of operating expense (OpEx). And, unlike the capital expense (CapEx) at the heart of Get More that improves over time in the form of enhanced price/performance, OpEx only grows as people get more expensive (but not necessarily more productive) over time.
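
A sketch of such an analysis, with entirely made-up numbers chosen only to illustrate the dynamic described above (CapEx price/performance improving, OpEx inflating):

    def five_year_costs(years=5):
        """Compare cumulative spend under the two strategies (arbitrary units)."""
        get_more = get_smart = 0.0
        for year in range(years):
            # Get More: keep buying gear whose price/performance improves
            # ~15%/yr, while staff-heavy OpEx inflates ~5%/yr.
            get_more += 100 * (0.85 ** year) + 80 * (1.05 ** year)
            # Get Smart: bigger up-front investment, then automation holds
            # the OpEx baseline lower and flatter.
            get_smart += (130 if year == 0 else 0) + 40 * (1.02 ** year)
        return get_more, get_smart

    more, smart = five_year_costs()
    print(f"Get More: {more:.0f}  Get Smart: {smart:.0f}")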

This is why Get Smart is so interesting and why, we believe, this strategy will become the dominant choice of the two. Add in improved performance, reliability, security and availability, and Get Smart can’t lose—over the long run, anyway.

The right tools and techniques for any given case derive from a complete consideration of the above elements. In many cases, just more of the same will work fine. After all, that’s what most end user organizations have always done, and, as long as end users are happy with network performance and budgets remain bounded, all really is well.

But in an increasing number of situations, adding intelligence and not just bulk will, we believe, yield far greater returns over time—in the form of improved reliability, availability, costs and capacity, with productivity gains, especially for end users, included in the bargain. Smart, after all, always triumphs over brute force. It just takes a while.

Digital Transformation, Dynamic Threats and Growing Accountability

March 1, 2019

By Mark Sangster, Chief Security Strategist at eSentire, Inc., contributor to SecurityMagazine.com

 

Businesses today accept the presence of cyber risks. In fact, 70 percent assume a business-altering event will occur in the next few years (FutureWatch Report), but they often have a more difficult time identifying specific risks, key factors and mitigation strategies. Worse, boards and senior leadership often make assumptions about the safety of their firms that are overly optimistic compared with the confidence ratings of security practitioners.

The difference between awareness and understanding is driven by the communication gap between the board and executives steering the business, and the security experts close to the problem. Both parties struggle to comprehend the other’s needs and responsibilities.

A firm’s risks stem from a handful of business aspects, including the firm’s participation in high-risk industries, its appetite for emerging technologies, and willingness to properly invest in targeted security practices. While this sounds obvious at first, it’s lost when the line of sight from the security practitioners to the board is over the horizon.

This article will explore board-level concerns, key drivers to invest in security, and how emerging technologies outpace the evolution of security technologies and services. The data presented in this article was collected in late 2018, through third-party research that surveyed 1,250 security executives, managers and practitioners. Data was collected from the United States, Canada and the United Kingdom. Participants were equally represented across various industries and company sizes, ranging from fewer than 100 employees to 5,000 or more. Read the full FutureWatch Report.

Major Attacks Are an Assumption

Business leaders such as the CEO and board members, and technical executives (CIOs) alike predict a major cyber-attack in the next two to five years: over 60 percent of respondents assume a major event will occur. Interestingly, 77 percent of CEO and board respondents consider their organization prepared for such an event. As expected, technical leaders are approximately 20 percent more likely to predict an attack and 10 percent less optimistic than their business peers about their organization’s preparedness.

Senior leadership fears operational disruption, reputational damage and significant financial losses over regulatory penalties as top consequences of a major security event.

While business leaders show confidence in their firm’s ability to manage a security breach, the devil is in the details. Nearly a third (29 percent) of respondents indicated that their high-value or high-profile information is not adequately protected, and two-thirds of respondents are not confident that their cybersecurity programs match their peers’ or that their programs are appropriately resourced.

The Cybersecurity Rosetta Stone

Boards and security practitioners still struggle to translate their concerns and objectives. Only one-third of business leaders are confident in their security executive’s ability to monitor and report on cybersecurity programs and 66 percent worry that these programs are not aligned to business objectives.

IT and security leadership sentiments echo this concern. Most organizations struggle to show senior management the value of IT security spend, including difficulties with status reporting. Aligning to enterprise risk management confounds over half of businesses, as do managing external risks from third-party vendors and the growing complexity of regulatory compliance.

On the positive side, progress has been made over the last few years. The CISO is no longer the least interesting person in the boardroom right up until the moment they become the most important one. Over half of respondents indicate their board is very familiar with the security budget (51 percent), overall strategy (57 percent), policies (58 percent) and technologies (53 percent), and reviews current security and privacy risks (51 percent). Moreover, the line of sight from the CISO to the board is more direct: 45 percent of security officers report to the board or CEO, 33 percent continue to report to the CIO, and a small handful (10 percent) report to a privacy or data officer.

Moreover, nearly two-thirds of security budgets are set to rise in 2019, yet spend on the security side is still reactionary. While regulatory requirements sit at the bottom of the board’s concerns, they top the list for security practitioners, and a security team’s spend is generally reactive to client demands, major technology purchases, a major security event or near miss, and the adoption of emerging technology.

Emerging Technology: A Double-edged Sword

IT and security teams find themselves in a difficult position: meeting the demands of the business to adopt emerging technologies that offer competitive advantage, while also carrying the burden of mitigating the risks that come with new deployments.

Nearly three-quarters of respondents are currently using cloud services or plan to deploy cloud services in the next six months, with financial services, manufacturing and healthcare leading the adoption rate. Only law firms lag in their cloud adoption. Artificial Intelligence (AI), Internet-of-Things (IoT) and Industrial IoT (IIoT) top the list behind cloud.

Cloud security adoption is the priority, followed closely by identity and access management, threat detection and response, and endpoint detection and response. Security Information and Event Management (SIEM) moves beyond a compliance tool and now plays a role in the greater detection and response portfolio.

More than half of telecom, information technology, financial services and manufacturers invested in securing their cloud services. Similarly, financial services, healthcare and manufacturing also emphasize threat detection and response investments. These industries are equally investing in identity and access management as a response to a more distributed workplace. Again, law firms are significantly less likely to adopt these technologies.

Digital transformation is here to stay and brings with it a drive to always evolve and constantly change. Economics demands that vendors constantly improve and offer new features and technologies, at a pace that outruns our understanding of the associated risks. We focus on the benefits while assuming vendors have resolved the security issues. For example, cloud technology tops the list of security priorities today, but AI and IoT/IIoT are on track to surpass cloud as the primary risk concern in less than two years.

This challenge will only increase over the coming years as 5G facilitates a ubiquitous mosaic of always connected devices. Risk associated with emerging technologies becomes more concerning as adoption rates accelerate, compressing the time in which organizations and vendors can adapt and develop appropriate security controls and deploy protective solutions.

Most Susceptible to Risk: Law Firms, Transportation and IT

Law firms lead when it comes to risks associated with external actors and attacks and their ability to report status, show value and meet internal risk standards and regulatory requirements. Transportation and IT firms report higher than average levels of risk. Financial services tend to run just below industry averages across external attacks and internal or industry requirements.

Digital Transformation Outpaces Current Security Approaches

Digital transformation touches every facet of business operation and redefines how businesses engage with their customers. The emerging technologies underpinning this tectonic shift must constantly expand capabilities and adapt to survive in a competitive environment. Current security approaches are not fluid enough to keep pace with adoption of emerging technology and platforms.

Today, most firms identify their primary security posture as leveraging prevention technologies and device management. Firms that leverage a predictive security model, with practices such as threat hunting, machine learning and device analytics, reduce their risk by 30 percent. Less than one-fifth of firms identify as predictive. The trend is consistent across all industry segments, with financial and healthcare services leading the charge and law firms lagging.

Firms adopting predictive security models are better able to identify never-before-seen threats and have engaged rapid response capabilities to reduce the risk of a business-altering event. Over the next two years, older preventative models will drop to less than one-third of firms, while predictive threat hunting will more than double to 40 percent. This trend correlates with the shift in business drivers away from regulatory dominance toward business-centric considerations such as operational disruption, reputational damage and, of course, financial losses.
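
The "never-before-seen" idea can be pictured with a tiny sketch: baseline what normally runs in the environment and score anything outside that baseline for a threat hunter to review. The events and scoring below are invented for illustration, not eSentire's methodology.

    from collections import Counter

    # Invented baseline: process launches observed during a quiet week.
    baseline = Counter({"chrome.exe": 200, "outlook.exe": 150, "excel.exe": 90})

    def hunt_score(process_name):
        """Never-before-seen processes score 1.0; common ones near 0."""
        seen = baseline[process_name]
        return 1.0 if seen == 0 else 1.0 / (1.0 + seen)

    for proc in ("chrome.exe", "mimikatz.exe"):
        print(f"{proc}: {hunt_score(proc):.3f}")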

Interestingly, advanced firms are more apt to adopt emerging security technologies such as endpoint, threat detection and response, identity access management, and cloud security. Moreover, mature firms aggressively leverage SaaS and are more likely to adopt 100 percent cloud-based security services than firms using a device-management model. Outsourcing is a palatable alternative to recruiting and retaining threat hunting talent from a pool that cannot support the growing demand.

Digital Transformation, Dynamic Threats and Growing Accountability

Digital transformation continues to expand a larger and more fluid attack surface, one exposed to the advanced methodologies of well-resourced adversaries like organized criminals and nation-state actors. Regardless of industry, businesses operate in a world with ever-increasing accountability to protect their clients’ confidential information, adhere to state legislation, comply with privacy laws and meet the growing complexity of overlapping regulatory obligations.

This triad of risk demands that IT, security practitioners and leaders align with business governance objectives, while senior leadership acknowledges its role in establishing expectations and providing the resources to adequately protect the business, its investors, employees and customers.

We’ve left the world of prescriptive regulations as the measure of security end state. Many organizations recognize that the financial loss associated with operational disruption and reputational damage outweighs the penalties set out by regulators. In the future, organizations will likely move to a perspective driven by their clients. In this state, brand and reputation will form the barometer by which a company’s security performance is ultimately measured. Protecting the client will mean, by extension, protecting their data and services and avoiding operational disruption and the resulting financial losses.


Author: Mark Sangster, Chief Security Strategist at eSentire

Mark Sangster is an industry security strategist and cybersecurity evangelist who researches, speaks and writes about cybersecurity as it relates to regulations, ethical obligations, data breach incident response and cyber risk management.
