OpenAI’s GPT-4 is helping the visually impaired

The technology behind the ubiquitous generative AI chatbot is already answering visually impaired people’s daily queries with more detail and speed than human volunteers.

While the recreational uses of AI continue to make headlines every week, the technology’s ability to make day-to-day living more accessible is flying under the radar.

Widely regarded as the quality benchmark in the generative AI race, OpenAI’s GPT-4 is now able to accurately interpret images as well as text.

This upgrade instantly drew commercial interest from companies focused on assisted living, such as Be My Eyes. Since 2012, the company has helped people with visual impairments navigate their daily routines by connecting callers with sighted volunteers through video chat.

In the majority of cases, the caller asks a volunteer to describe something simple in front of them or to explain a process that isn’t accessible through other means.

It’s a simple yet creative way of making life more convenient, described by its CEO Mike Buckley as a ‘lovely merger of technology and human kindness.’ Could AI help to refine the process even further in the near future?

A beta version of Be My Eyes, trialled by a small pool of users, is testing a GPT-4 integration, and the early results are extremely positive.

Several participants have praised the detail and speed of its responses, and expressed a newfound, or rediscovered, sense of independence that comes from solving problems without another person’s assistance.

A company survey conducted before the chatbot integration revealed that many visually impaired people are reluctant to use up volunteer resources, feel uncomfortable calling a stranger about something personal, or simply dislike the idea of constantly relying on others.

GPT-4 not only alleviates these concerns but also makes the responses more analytical. Send it a photo of the inside of a fridge, for example, and it will not only list the image’s contents but also suggest recipes using the available ingredients.
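For readers curious about the mechanics, the sketch below shows roughly how an app might send a photo to OpenAI’s vision-capable chat API and read back a description. It is purely illustrative, not Be My Eyes’ actual (proprietary) integration; the model name, the file name, and the describe_photo helper are all assumptions.

```python
# Illustrative sketch only: Be My Eyes' real integration is proprietary.
# This shows, roughly, how an app could ask OpenAI's vision-capable
# chat API to describe a photo, using the public openai Python SDK.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def describe_photo(path: str, question: str) -> str:
    """Send a local photo plus a question; return the model's answer."""
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed vision-capable model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# Hypothetical usage, mirroring the fridge example above.
print(describe_photo("fridge.jpg",
                     "List what is in this fridge and suggest a recipe."))
```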

Another potential advantage is that AI removes human subjectivity. A photo taken inside a clothes store, for instance, would come back with precise terminology and accurate colour names instead of subjective descriptions that would likely vary between volunteers.

[Embedded TikTok from @lucyedwards: ‘AD I’m blind, can AI tell me where to go on the London tube? @bemyeyesapp’]

One influential user of the new Be My Eyes, Lucy Edwards, recently uploaded a touching TikTok showing her experience with the app. Standing in a London Underground station, she sends a photo of a tube map to GPT-4 and tentatively asks how to reach her destination.

To Edwards’ amazement, an automated voice quickly responds with the best live route to Tottenham Court Road. ‘I have my autonomy back,’ she says, clearly overcome with emotion.

‘The more AI learns, the more autonomous I’m going to be as a blind person,’ she says. ‘I thought I gave that up 10 years ago. I thought I would never get that back.’

The GPT-4 version will reportedly be free once it rolls out publicly, but in the meantime, Be My Eyes is adamant that its volunteer community will continue as normal. Tests are ongoing to tune the AI, minimising the potential for ‘hallucinations’ and ensuring it is safe to use.
