I was looking at the number of times platforms have died on the web, and I began thinking about the eventual slow death of AI enthusiasm and what that will do to the Blind community.
It really is a bizarre feeling when you’re the only skeptic of a thing within your own community. My first post about AI gained some attention, as did the follow-up post on this topic. AI is taking the blind community by storm. Be My Eyes has added it to their product to describe pictures. Let’s not mention the fact that the particular large language model (LLM) they chose, ChatGPT, was never the right kind of machine learning for the task of describing images; a different kind of machine learning would have been better suited. Blind podcasters are praising LLMs and saying they’re more accurate than human descriptions, and, well, blind voiceover artists are more than willing to give places like ElevenLabs their voices so they can, well, I don’t even know yet. I guess attempt to make audiobooks.
I’m of two minds about this whole thing. While the stuff LLMs give us is often incorrect, it’s still information that the sighted world won’t, or refuses to, give us. And while I absolutely hate the hype and AI nonsense in general, and don’t use any LLM on any of my content, ElevenLabs means blind and visually impaired people whose Braille skills aren’t that great can become audiobook narrators.
Even though I’ll never hire a blind narrator who uses ElevenLabs to generate an audiobook, am I practicing discrimination by doing this? Someone will say yes. I don’t know what will come of this wave of LLMs and dependence on AI, but I predict that once the hype dies down, well, blind and even legally blind people are probably going to be advocating for more accessibility measures, just in a different way.
AI accessibility will have its own challenges. In fact, we’re already witnessing AI developers forgetting that disabled people exist, so I fully predict that blind people will be advocating to make the LLM platforms themselves accessible. While that’s a fight that won’t happen for a while, I also predict that the text output of some of these generators will be inaccessible, prompting another push to make these interfaces usable by everyone. I also predict web accessibility will actually get worse, not better, as coding models spit out inaccessible code that developers won’t check, or won’t even care to check. But I’m the only one within the community who’s unenthusiastic about the benefits of AI within our community.
I’m old enough to remember when OCR became a huge hit for playing video games, scanning inaccessible documents, and more. While I also use OCR for speed and efficiency, or even just to get halfway there, I still ask a human to read things because, even today, OCR isn’t where I thought it would be. Same for self-driving cars. Now that AI is the thing, I doubt OCR or even self-driving cars will see any significant advancements.
As for usage, well, that’s what blind people are using LLMs for at this very moment. They’re using them to describe characters from TV shows and movies in great detail. They’re using them to describe music videos. But the blind and visually impaired people who use these tools don’t much care about the accuracy of the information. It’s information they’ve never had before, and accuracy is an afterthought. Then again, these are the very same blind and visually impaired people who say the social model of disability is woke PC nonsense, so it’s no surprise that the community as a whole would jump on the LLM hype. The blind and visually impaired people advocating for this have been conditioned to believe that technology will solve all accessibility problems because, simply put, humans won’t do it. Humans won’t care. Humans are inefficient, squishy things that live in a completely different, subjective world. Blind and visually impaired people don’t want to wade through a subjective landscape. Objectivity matters to our community, no matter the cost to accuracy.
Another reason the Blind community is enthusiastic about AI is simply that, to other blind people, it makes them feel like less of a burden on society and vastly more independent. An LLM will never get annoyed, get aggravated, think less of the person, or anything similar. Humans have been conditioned to think we are useless because we are blind, so any help we ask for is viewed as a job or a chore rather than a chance to make someone’s life easier.
Also, most blind people don’t have a sighted person around, because sighted people never willingly talk to a blind person just because. An LLM will always be there, well, until the servers go down. But that isn’t even a concern within the community yet, and I don’t think it will be until an AI service shuts down the same way the bionic eye servers shut down.
Even though I don’t use AI or LLMs, and even though I have in-person and remote friends I can get assistance from without feeling as if I’m wasting their life, I keep thinking about how our community has simply replaced dependence on humans with dependence on technology. I wonder, though, what the next technological thing our community clings to will be, because humans fail us again, and again, and again, and again. Humans still routinely say no to accessibility when designing websites, so it’s no wonder some blind and visually impaired people are championing AI accessibility toolbars like AccessiBe. The web is inaccessible, and with every refusal of our basic access needs, it’s no wonder the community has given up on humans and dove headfirst into putting its faith in another algorithm.
My stance is unique within the community. Have I used these tools to describe a picture when no human was around? Of course. I’ve used them to describe memes now that the Say My Meme podcast appears to have stopped updating. I’ve used them to get a starting point on pictures. It’s the same with OCR. But even though I’ve used these tools, I just don’t think they’re worth even half the hype. In fact, there are incidents happening right now where AI is starting to look like Web3 hype nonsense: Facebook, now Meta, got rid of their Responsible AI team; search engines are becoming useless because AI junk is flooding results; small search engines are becoming very popular, indicating people are tired of this new wave of content; OpenAI can’t decide whether it wants to fire people or bring them back over ethics versus growth; and tech billionaires keep strategizing to make even more money off their own hype.
There are many more examples of AI going very wrong, and of big tech making people very angry by stealing their labor, but I’ll leave you with the best podcast for debunking all the AI hype and nonsense. Well, okay, two podcasts: Tech Won’t Save Us, which is basically a podcast that detests tech and tech culture in general, and my personal favorite, Mystery AI Hype Theater 3000, a podcast that debunks all the AI hype.
Meanwhile, I’ll be reading personal blogs and the small web, because the indieweb is cozy and because personal websites won’t die as often or as quickly as the rest of the web. Tootles!