Glambot SNAFH-R1: A Low-Cost 4-DOF Camera Arm for Indoor Photobooths Based on Image Processing
DOI: https://doi.org/10.63643/jodens.v5i2.322

Keywords: Glambot, Robotic Camera, 4-DOF Camera Arm, Subject Tracking, Image Processing

Abstract
This paper presents the design and implementation of SNAFH-R1, a low-cost four-degree-of-freedom (4-DOF) glambot camera arm intended for indoor photobooth applications. The system integrates a Python-based image processing module on a laptop with an Arduino Uno R3 controller that drives three high-torque RDS3225MG servos and one MG995 servo via a PCA9685 PWM driver. A USB camera (Logitech C270) is used to capture video frames that are processed in real time using grayscale conversion, illumination enhancement, noise reduction, and human detection based on a cascade classifier. The centroid of the detected region of interest is then converted into position offsets and mapped to target joint angles, which are applied to the servos using clamped and interpolated commands to produce smooth camera motion. A BH1750 light sensor and relay-controlled LED lamp are employed to automatically stabilize local illumination around the subject. Experimental observations in an indoor photobooth scenario show that SNAFH-R1 can maintain the subject’s head and upper body within the camera frame at short to medium distances, while producing reasonably smooth camera trajectories and operating stably during short test sessions. These results indicate that the proposed system is a feasible low-cost alternative for automated glambot-style camera movements in small to medium-scale events and as a teaching platform for robotics and computer vision.
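As a rough illustration of the pipeline summarized above, the following Python/OpenCV sketch shows how cascade-based detection, centroid extraction, and clamped angle updates could fit together. It is not the authors' code: the choice of OpenCV's bundled frontal-face Haar cascade, the default camera capture, the neutral 90-degree servo pose, the gain and step-limit constants, and the serial hand-off to the Arduino are all illustrative assumptions.

# Minimal sketch (not the paper's implementation): cascade detection,
# centroid-to-offset conversion, and clamped angle updates for pan/tilt.
import cv2

# Hypothetical tuning constants; the paper does not publish its gains.
PAN_GAIN_DEG_PER_PX = 0.05
TILT_GAIN_DEG_PER_PX = 0.05
MAX_STEP_DEG = 2.0  # limit per-frame change to keep camera motion smooth

def clamp(value, low, high):
    return max(low, min(high, value))

# Assumed detector; an upper-body cascade (haarcascade_upperbody.xml) would
# also match the "head and upper body" framing described in the abstract.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)          # Logitech C270 assumed on device index 0
pan_deg, tilt_deg = 90.0, 90.0     # assumed neutral servo angles

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # grayscale conversion
    gray = cv2.equalizeHist(gray)                    # illumination enhancement
    gray = cv2.GaussianBlur(gray, (5, 5), 0)         # noise reduction
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(detections) > 0:
        # Track the largest detected region of interest.
        x, y, w, h = max(detections, key=lambda d: d[2] * d[3])
        cx, cy = x + w / 2, y + h / 2
        # Offset of the centroid from the frame centre, in pixels.
        dx = cx - frame.shape[1] / 2
        dy = cy - frame.shape[0] / 2
        # Map offsets to clamped angle increments; signs depend on mounting.
        pan_deg += clamp(-dx * PAN_GAIN_DEG_PER_PX, -MAX_STEP_DEG, MAX_STEP_DEG)
        tilt_deg += clamp(dy * TILT_GAIN_DEG_PER_PX, -MAX_STEP_DEG, MAX_STEP_DEG)
        pan_deg = clamp(pan_deg, 0.0, 180.0)
        tilt_deg = clamp(tilt_deg, 0.0, 180.0)
        # In the real system these target angles would be sent to the Arduino,
        # which drives the servos through the PCA9685 PWM driver.
    cv2.imshow("SNAFH-R1 preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

Limiting the per-frame angle increment is one simple way to realize the clamped and interpolated commands mentioned in the abstract; in practice the gains and step limit would be tuned against the arm's geometry to balance tracking responsiveness against visible jitter.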









