Researchers at the Massachusetts Institute of Technology (MIT) have been working on an artificial intelligence algorithm called Norman - named after the main character in Alfred Hitchcock's 1960 thriller 'Psycho' - and trained it on image captions from a Reddit community known to share graphic depictions of death.
As the project notes reveal, it was then presented with psychological tests using Rorschach inkblots - and where a standard AI saw "a black and white photo of a baseball glove", what Norman saw was "man is murdered by machine gun in broad daylight".
The researchers have noted that this behaviour could be down to the "biased data" Norman was fed, rather than the nature of AI algorithms themselves.
They explained: "The data used to teach a machine-learning algorithm can significantly influence its behaviour.
"So when people say that AI algorithms can be biased and unfair, the culprit is often not the algorithm itself but the biased data that was fed to it ... [Norman] represents a case study on the dangers of artificial intelligence gone wrong when biased data is used in machine-learning algorithms."
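The effect the researchers describe can be illustrated with a toy experiment. The sketch below trains the same trivial keyword-matching classifier on two made-up caption sets - one ordinary, one dominated by violent captions - and shows it labelling the identical ambiguous input differently. The captions, labels, and classifier here are invented for illustration; this is not the MIT team's model or data.

```python
from collections import Counter

def train(captions):
    """Count word frequencies per label from (caption, label) pairs."""
    counts = {}
    for text, label in captions:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def predict(counts, text):
    """Pick the label whose training vocabulary best matches the input."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

# An ordinary training set: mostly everyday scene descriptions.
standard = [
    ("a black and white photo of a baseball glove", "neutral"),
    ("a group of birds sitting on a tree branch", "neutral"),
    ("a vase of flowers on a table", "neutral"),
    ("man is shot dead in the street", "violent"),
]

# A skewed training set: mostly violent captions, echoing the subreddit data.
skewed = [
    ("a black and white photo of a glove", "neutral"),
    ("man is murdered by machine gun in broad daylight", "violent"),
    ("man is shot dead in front of his screaming wife", "violent"),
    ("body found near the street after a shooting", "violent"),
]

# The same ambiguous "inkblot" description goes to both trained models.
inkblot = "black and white shape of a man in daylight"
print(predict(train(standard), inkblot))  # neutral
print(predict(train(skewed), inkblot))    # violent
```

The classifier's code is identical in both runs; only the training captions differ, yet the skewed set flips its reading of the same ambiguous input - the same mechanism, in miniature, that the researchers point to with Norman.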