Contextual info from sensors -> GPT's spatial capabilities -> HCI?

Hinckley et al. (2000) showed that inexpensive sensors can enhance interaction on mobile devices. I am very interested in using inexpensive sensors to give GPT more spatial capability, enabling richer interactions with GPT beyond a chatbot, which might also benefit groups such as visually impaired users. The underlying idea is to shift focus away from cameras and the latency of processing large volumes of video data, and instead consider how other kinds of sensor-provided information, such as the contextual data offered by the ultra-wideband technology behind AirTags, could power these interactions.
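A minimal sketch of what this pipeline could look like, under stated assumptions: UWB-style readings arrive as distance plus azimuth (roughly what Apple's Nearby Interaction framework reports for AirTag-class tags), and the LLM call is a hypothetical placeholder (`ask_gpt`) rather than any particular API. The point is that a few bytes of ranging data can be turned into a textual scene description and prepended to a prompt, instead of streaming video.

```python
from dataclasses import dataclass
from math import degrees


@dataclass
class TagReading:
    """One ultra-wideband ranging sample for a tagged object (assumed format)."""
    label: str          # human-readable name of the tagged object
    distance_m: float   # range to the tag in meters
    azimuth_rad: float  # horizontal angle from the device's facing direction


def spatial_context(readings: list[TagReading]) -> str:
    """Render raw ranging samples as a short textual scene description
    that can be prepended to a GPT prompt in place of video frames."""
    lines = []
    for r in readings:
        side = "ahead" if abs(degrees(r.azimuth_rad)) < 15 else (
            "to your right" if r.azimuth_rad > 0 else "to your left")
        lines.append(f"- {r.label}: about {r.distance_m:.1f} m {side}")
    return "Nearby tagged objects:\n" + "\n".join(lines)


def ask_gpt(prompt: str) -> str:
    """Hypothetical placeholder for a chat-completion call; swap in any LLM client."""
    return f"(LLM response to: {prompt[:60]}...)"


if __name__ == "__main__":
    readings = [
        TagReading("keys", 1.2, 0.6),
        TagReading("front door", 4.5, -0.1),
    ]
    context = spatial_context(readings)
    print(ask_gpt(context + "\n\nGuide me to the front door."))
```

For a visually impaired user, the same context string could back spoken turn-by-turn guidance; the sensor payload here is a handful of numbers per object, so the latency concern around video largely disappears.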

Hinckley, K., Pierce, J., Sinclair, M., & Horvitz, E. (2000). Sensing techniques for mobile interaction. In Proceedings of the 13th Annual ACM Symposium on User Interface Software and Technology (pp. 91-100).