The White House’s task force on artificial intelligence expects to release this spring an updated version of the AI research and development plan the Obama administration launched two years ago.
“Hopefully in early spring you’ll be seeing an updated version of that R&D strategic plan, as well as the process that we’re going to be using at the federal level for tracking our progress in those investments,” Lynne Parker, the assistant director for artificial intelligence at the White House’s Office of Science and Technology Policy (OSTP), said Tuesday at a Center for Data Innovation event in Washington.
Parker, who served as co-chair of the task force that created the Obama-era R&D plan, said the Trump administration is “very pleased with the gist” of the 2016 strategy, but the task force found it needed a few tweaks.
“Certainly, the fact that industry these days is investing significantly in AI R&D is something we can’t ignore, so we want to make sure that our federal investments are in areas where we can ensure that we are looking at challenges that are not going to be solved by the open market,” she said.
The co-chairs of the White House’s Select Committee on Artificial Intelligence met Nov. 30 to discuss the next steps for “refreshing” the Obama-era AI R&D plan.
Those steps, Parker said, include looking to “ensure that we are leveraging our federal resources across the government in order to accelerate our advances and our national leadership in AI.”
The select committee also aims to reduce duplication of AI research efforts across the government.
France Córdova, the director of the National Science Foundation; Steven Walker, the director of the Defense Advanced Research Projects Agency (DARPA); and Michael Kratsios, the deputy federal chief technology officer and head of OSTP, serve as the select committee’s co-chairs.
In September, OSTP released a request for information (RFI) to get public input on the AI strategy. Parker said the White House plans to release those responses soon.
“The overwhelming majority of the types of feedback that we received were positive,” Parker said. “We don’t really need to overhaul the plan, but we need to just emphasize some particular areas.”
The White House in May announced the launch of the AI task force.
DARPA in September launched AI Next, a $2 billion effort to develop the “third wave” of AI solutions. Those cutting-edge advancements include “explainable AI,” which can demonstrate how it arrives at answers.
“This is one of the tough challenges of AI today, is that it can give answers, but we don’t necessarily understand those answers,” Parker said.
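For readers unfamiliar with the term, the idea behind explainable AI can be sketched with a deliberately simple model. In a linear scoring model, each feature’s contribution to the final score can be read off directly, which is exactly the kind of transparency deep networks lack. The feature names and weights below are hypothetical, chosen only for illustration:

```python
import numpy as np

# Toy illustration of explainability: for a linear model, the product
# w_i * x_i shows exactly how much each feature contributed to the
# answer. Feature names and weights here are hypothetical.
features = ["packet_rate", "login_failures", "payload_entropy"]
weights = np.array([0.4, 1.2, 0.9])   # hypothetical learned weights
x = np.array([2.0, 3.0, 1.0])         # one input to explain

score = weights @ x                   # 0.8 + 3.6 + 0.9 = 5.3
contributions = dict(zip(features, weights * x))

print(round(score, 2))                # 5.3
for name, c in contributions.items():
    print(f"{name}: {c:+.2f}")        # e.g. login_failures: +3.60
```

The research challenge Parker describes is producing this kind of per-factor accounting for deep networks, where the mapping from input to answer is not a simple weighted sum.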
AI Next will also focus on projects like high-performance AI, which can learn without the massive quantities of data most systems require.
“The facial recognition techniques that are very successful today require millions and millions of examples and training,” Parker said. “So it’s a cutting-edge research challenge right now to create AI that can learn in other ways.”
But even in best-case scenarios, current-generation AI struggles to compete with the learning skills of young children.
“She’s reading a storybook at home, and she sees a picture of a fish, and you say ‘There’s a fish,’” Parker said. “And then next weekend, you go on a field trip to the aquarium, and without any prompting, she sees a fish. So she’s somehow able to translate a single cartoon of a fish in a storybook to a real fish in an aquarium.”
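One line of research aimed at that gap is one-shot classification: instead of training on millions of labeled examples, a system stores a single “prototype” per class and labels new inputs by which prototype they are nearest to. The sketch below is a minimal, hypothetical version of that idea, with random vectors standing in for the image embeddings a real system would compute:

```python
import numpy as np

# Minimal one-shot classifier sketch (hypothetical): one stored example
# per class, nearest-neighbor labeling for new inputs. Vectors stand in
# for embeddings a real system would compute from images.
rng = np.random.default_rng(1)

prototypes = {                        # a single example per class
    "fish": rng.normal(0.0, 1.0, 8),
    "bird": rng.normal(5.0, 1.0, 8),
}

def classify(x):
    """Return the class whose single stored prototype is nearest to x."""
    return min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))

# A new fish-like input: the stored example plus small variation,
# loosely analogous to a cartoon fish vs. a real fish in an aquarium.
query = prototypes["fish"] + rng.normal(0.0, 0.3, 8)
print(classify(query))                # fish
```

The hard part, and the reason this remains a research challenge, is learning embeddings in which a cartoon fish and a live fish actually land near each other.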
The AI task force also sees potential for AI to help agencies defend the perimeter of their networks from cyber attacks.
“If you’re having your cyber system attacked by some cyber hacker, you need to be able to respond very quickly,” Parker said. “You need to be able to respond more quickly than a human can respond.”
Likewise, the task force looks to shed light on “adversarial AI,” or ways adversaries can manipulate AI systems to deliberately produce misleading results.
“There are ways to poison data and to trick AI systems into thinking they’re seeing what they’re not really seeing,” Parker said. “You have maybe one kind of animal, and you add what looks to you like white noise, and with high confidence, the AI system says it’s a different animal. Or you have a turtle, and with high confidence, it says it’s a rifle.”
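The “white noise” attack Parker describes is often built with gradient-sign perturbations (the fast gradient sign method, or FGSM). The sketch below shows the core mechanic on a toy linear classifier rather than a deep network: a small per-feature nudge against the gradient flips the model’s answer, even though the change would look like noise to a person. The weights here are random stand-ins, not a real trained model:

```python
import numpy as np

# FGSM-style adversarial perturbation on a toy linear classifier --
# illustrative only; real attacks target deep image models.
rng = np.random.default_rng(0)

w = rng.normal(size=64)               # hypothetical learned weights
b = 0.0

def predict(x):
    """Class 1 if the linear score w.x + b is positive, else class 0."""
    return int(w @ x + b > 0)

# An input the model confidently labels class 1 (aligned with w).
x = w / np.linalg.norm(w)

# For a linear model, the gradient of the score w.r.t. x is just w,
# so stepping against sign(w) lowers the score fastest per unit change.
epsilon = 0.3                         # small, noise-like budget
x_adv = x - epsilon * np.sign(w)

print(predict(x))                     # 1
print(predict(x_adv))                 # 0 -- the tiny nudge flips the label
```

On deep networks the same recipe applies, with the gradient computed by backpropagation, which is what makes the turtle-to-rifle demonstrations Parker mentions possible.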
In that regard, Parker said artificial intelligence can be a double-edged sword.
“It’s a two-directional research challenge … adversarial AI, how well you plug the holes that are inherent to how the technology works, and then how you can use AI to your advantage to thwart cyber hacks,” Parker said.