DESIGN AND CONSTRUCTION OF A RASPBERRY PI-BASED HUMAN FOLLOWING ROBOT WITH TENSORFLOW LITE-BASED DETECTION
Abstract
The application of artificial intelligence (AI) in robotics has produced innovative solutions to various operational challenges, particularly those involving a system's ability to detect and interact with humans automatically. One such implementation is the Human Following Robot, an autonomous robot designed to follow human movement using an AI-based image-processing system. This research presents the design and development of a Human Following Robot prototype based on the Raspberry Pi 4 and a TensorFlow Lite detection model. The methodology follows an experimental approach focused on evaluating target-detection accuracy and the stability of the robot's movement under varying conditions: the robot was tested on flat and rocky terrain, and in bright and dim lighting. The study is limited to detecting a single target person, with no obstacles or other people around the robot. The results show that the robot performs optimally at a detection distance of 200–300 cm, achieving 92%–94% accuracy in lighting above 100,000 lux, with stable movement on flat terrain. Based on these results, the robot has strong potential to support more efficient and safer military logistics operations. Future work will focus on improving accuracy in low-light conditions, refining the mechanical design, and integrating multi-object tracking.
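As an illustration of the following behavior described above, the sketch below shows one common way to turn a person-detection bounding box (as produced by a TensorFlow Lite SSD-style detector) into a drive command: steer toward the horizontal center of the box and use its relative area as a distance proxy. This is not the authors' implementation; the function name, thresholds, and command strings are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): map a detected person's
# bounding box to a drive command. Thresholds are assumed values.

def follow_command(box, frame_w, frame_h,
                   center_tol=0.15, near_area=0.40):
    """box = (xmin, ymin, xmax, ymax) in pixels, or None if no detection.

    Returns one of "stop", "turn_left", "turn_right", "forward".
    """
    if box is None:
        return "stop"                       # no person detected: hold still
    xmin, ymin, xmax, ymax = box
    cx = (xmin + xmax) / 2.0 / frame_w      # horizontal center in [0, 1]
    # Relative box area serves as a rough proxy for target distance.
    area = (xmax - xmin) * (ymax - ymin) / float(frame_w * frame_h)
    if cx < 0.5 - center_tol:
        return "turn_left"                  # target is left of center
    if cx > 0.5 + center_tol:
        return "turn_right"                 # target is right of center
    if area > near_area:
        return "stop"                       # target close enough: stop
    return "forward"                        # target centered but far: advance
```

In a full system this function would be called once per camera frame, with `box` taken from the highest-confidence "person" detection, and the returned command mapped to motor-driver GPIO signals.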
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Published in TESLA: Jurnal Teknik Elektro.