Introduction

This project was carried out for the course Embedded Systems Design (COP315) under the guidance of Prof. M. Balakrishnan and mentor Mr. Vikas Upadhaya, by a team of two sophomores and one final-year student: Lakshya Singhal (COP315), Karan Mittal (COP315), and Sai Eshvar Vipparthy (BTP). The final prototype was on display at the 15th Open House at IIT Delhi.

Visually impaired (VI) persons face serious difficulties in leading an independent life, difficulties inherent to the nature of their impairment. In particular, obstacle avoidance, orientation, and navigation in an unknown environment are extremely challenging for a blind person. This project addresses that problem by helping a VI user navigate an unknown environment with the help of a volunteer and an Android application.

We developed an Android application and a camera-integrated device to help blind people perform basic tasks, such as navigation, with the help of a volunteer. The camera stream is shown on the volunteer's Android device. The camera-integrated device is a light, portable unit that can be mounted on the blind person's spectacles. The app includes a voice-call feature, guiding the VI user through voice commands, and a GPS feature that lets the volunteer see the blind person's current location. When the VI user is not connected to a volunteer, the app's OCR feature helps them recognize and read text in front of them.

Approach

Hardware

For this project we used:

  • Raspberry Pi zero W board
  • Raspberry Pi Camera
  • Battery
  • Power Converter
  • A lightweight casing, designed and fabricated in-house, for the Raspberry Pi Zero and camera, which could be mounted over the user’s spectacles; the eyepiece casing also has a switch to turn the Raspberry Pi on/off.

The following iterations of the eyepiece casing were 3D printed:

Software

The following services have been implemented in the Android application:

  1. For the VI user:
  • Request a volunteer
  • OCR functions
  • Device control
  • Contacts manager
  2. For the volunteer:
  • Accept/reject communications
  • Location of the VI user on a map
  • NSS volunteering

The following features were implemented on the Raspberry Pi Zero W:

  • Streaming the live feed from the Pi camera and hosting it on a local server (Motion library)
  • Face detection (OpenCV library) and OCR (Tesseract)

Flow Diagram


Open House Demonstration:

We had an interesting set of visitors, ranging from curious school students to professors, and received valuable feedback. Some visitors were so enthusiastic about the project that they asked when we would be launching the device as a product.


Feedback and Improvement:

  • Eliminate the JioFi Wi-Fi device (connecting the Raspberry Pi to the phone’s hotspot could be a solution)
  • Global streaming, beyond the local network
  • Weight reduction (a customized PCB could be a solution)
  • Faster integration of the remaining OCR features (object recognition is still left)
  • Single-press calling, without needing a smartphone
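
For the first point, one option is to point the Pi's `wpa_supplicant` at the phone's hotspot so the JioFi unit is no longer needed. A minimal configuration sketch is below; the SSID and passphrase are placeholders, not the project's actual credentials.

```ini
# /etc/wpa_supplicant/wpa_supplicant.conf (sketch; ssid/psk are placeholders)
country=IN
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="PhoneHotspot"
    psk="hotspot-password"
}
```

With this in place, the Pi joins the phone's hotspot at boot, so the phone and Pi share a network without extra hardware.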

Link to the GitHub repository:
https://github.com/SaiEsvarVipparthy/RemoteVisualAssistant.git