A recent global survey of AI researchers found that more than a third were concerned that AI could ultimately lead to a “global disaster” on par with nuclear war. The AI Index Report, released by the Stanford Institute for Human-Centered Artificial Intelligence, shows researchers are seriously concerned about what could happen with this technology if it isn’t reined in by proper regulation.

“These systems demonstrate capabilities in question answering, and the generation of text, image, and code unimagined a decade ago, and they outperform the state of the art on many benchmarks, old and new,” the report says. “However, they are prone to hallucination, routinely biased, and can be tricked into serving nefarious aims, highlighting the complicated ethical challenges associated with their deployment.”