Just when I started to wonder what to blog about today, I came across this absolute doozy by Beth Moulam. Beth, like me, uses AAC, but she has clearly had quite a few issues with people doubting the authenticity of what she types into her communication aid: she describes how people have sometimes assumed that what she is ‘saying’ is AI-generated. I can certainly see how that could be an issue, although to be fair I have never really encountered it, probably because I have always typed what I want to say word for word. However, Beth writes “I’m battling thoughts around if a speaking child said the same words an AAC user outputs, would we accept it as typical development and assume their competence? Or because someone is using AAC with an AI interface, are we holding them to a new standard, different to their speaking peers? Are we forgetting to presume competence and assuming it is the AI speaking, and clearly cannot be the person?” People are evidently starting to question whether a young person using a communication aid is actually choosing the words they use, or whether the device is doing it for them; a very ominous, concerning turn of events.
With the rise of Artificial Intelligence I can see this becoming more and more of an issue, one which Moulam begins to explore really well. As a writer, too, I'm starting to see AI as a growing threat. Frankly, the last thing I need is people starting to question the authenticity of what I'm saying or writing. Historically, people have always doubted the intelligence of those who can't communicate ‘normally’, and the issue Moulam lays out here may well compound that.