Over the years, Google trained computer systems to keep copyrighted content and pornography off its YouTube service. But after seeing ads from Coca-Cola, Procter & Gamble and Walmart appear next to racist, anti-Semitic or terrorist videos, its engineers realised their computer models had a blind spot: They did not understand context.
Now, teaching computers to understand what humans can readily grasp may be the key to calming fears among big-spending advertisers that their ads have been appearing alongside videos from extremist groups and other offensive material.
Google engineers, product managers and policy wonks are trying to train computers to grasp the nuances of what makes certain content objectionable to advertisers.