
AI-assisted computer interfaces of the future

2023-03-23 #interfaces #sci-fi #tng

I’ve been rewatching Star Trek TNG lately, and a question that often comes to mind is:

If they have all these computerised sensors, and they have a computer capable of converting information into voice, why do they need technicians constantly interacting with each terminal to read information out to the team and to the captain?

This got me thinking about how we'd design these terminals today, assuming we had such a ship full of sensors and that kind of computing power. What would technicians/specialists be doing?

The first thing that comes to mind is how AI assistants work today. They can spit out a lot of suggestions and ideas, but many (most?) are (still?) plainly wrong or useless, so their output needs to be reviewed by a human. Specifically, by a specialist in the field. Even for highly sophisticated AIs and computers, I suspect this would remain the case.

So a possible user interface for such scenarios is a split-screen one: the left half surfaces what the computer thinks it's found, e.g. readings that stand out, sources of energy, possible life forms, etc. The right half has a view into the raw data, where the operator can cross-reference readings manually to confirm the AI's findings.
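To make that more concrete, here's a minimal sketch of what could sit behind such a split view, assuming a hypothetical stream of numeric sensor readings. The flag_oddities function stands in for the AI: it merely flags readings that deviate strongly from the rest of the stream and pairs each flag with the surrounding raw data for the operator's half of the screen. The names, thresholds and detection logic are all invented for illustration.

    # A stand-in "AI": flag readings far from the rest of the stream, and keep
    # the raw context around each flag for the operator to cross-reference.
    from dataclasses import dataclass
    from statistics import mean, stdev

    @dataclass
    class Finding:
        index: int            # where in the stream the oddity was spotted
        reading: float        # the reading that stood out (left half)
        context: list[float]  # surrounding raw data (right half)

    def flag_oddities(readings: list[float], window: int = 5,
                      threshold: float = 2.0) -> list[Finding]:
        """Return readings that deviate strongly from the rest of the stream."""
        mu, sigma = mean(readings), stdev(readings)
        findings = []
        for i, value in enumerate(readings):
            if abs(value - mu) > threshold * sigma:
                context = readings[max(0, i - window):i + window + 1]
                findings.append(Finding(index=i, reading=value, context=context))
        return findings

    readings = [1.0, 1.1, 0.9, 1.0, 42.0, 1.2, 0.8, 1.1, 1.0]
    for finding in flag_oddities(readings):
        print(f"Possible reading of interest at #{finding.index}: {finding.reading}")
        print(f"Raw context for manual review: {finding.context}")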

That kind of interface fits the workflow seen on this sci-fi show (and others too): the operator gets an indicator that the computer found something, interacts with it for a bit, and then confirms the computer's findings before reporting them to the team.

Such an interface would also need specialists/technicians working these terminals. It fits the workflow of this sci-fi show, but also seems to fit where this kind of technology is headed. Running a survey where the sensors produce more data than a human could analyse in full would work quite well with this approach: an AI points out all the oddities or potential findings of interest, and the specialist does the final review/audit before reporting the result to their team.

I think this is an interesting user-interface model to consider for the future: we offload data that's too much for us to an AI, and only look at the interesting points it finds. The AI points out what it's found and the operator checks “does this make any sense, or is this just a false positive?”. You'd want to bias towards false positives, because the alternative is false negatives, and you don't want to miss out on interesting readings.
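As a rough sketch of that bias, assume a hypothetical detector that scores each reading between 0 (ordinary) and 1 (almost certainly interesting). Keeping the surfacing threshold deliberately low means the computer over-reports and the specialist filters out the noise; the names and numbers below are invented for illustration.

    def should_surface(score: float, threshold: float = 0.3) -> bool:
        """Decide whether a reading is shown to the operator for review.

        The threshold is deliberately low: surfacing an ordinary reading only
        costs the specialist a moment of review (a false positive), while
        filtering out a genuinely interesting one means nobody ever sees it
        (a false negative).
        """
        return score >= threshold

    scores = {"subspace echo": 0.35, "thermal drift": 0.12, "unidentified signal": 0.91}
    for label, score in scores.items():
        if should_surface(score):
            print(f"Flagged for operator review: {label} (score {score})")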

Have comments or want to discuss this topic?
Send an email to my public inbox: ~whynothugo/public-inbox@lists.sr.ht.
Or feel free to reply privately by email: hugo@whynothugo.nl.

— § —