We use recent explainable artificial intelligence (XAI) techniques, namely SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), together with the Light Gradient Boosting Machine (LightGBM), to analyze diverse physical datasets from agricultural (agri-) workers. We have previously developed several promising body-sensing systems to support agri-technical advancement, worker training and development, and safety. However, existing methods and systems do not allow sufficiently in-depth analysis of human motion. We therefore also developed wearable sensing systems (WS) that capture real-time three-axis acceleration and angular-velocity data on agri-worker motion, enabling analysis of human dynamics and statistics across different agri-fields, meadows, and gardens. After investigating the resulting time-series data with a purpose-built Python program, we discussed our findings and recommendations with working agri-workers and managers. In this study, we apply XAI and visualization to data from experienced and inexperienced agri-workers to develop a practical method that agri-directors can use to train agri-workers.
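The pipeline described above can be sketched in Python. The following is a minimal illustrative example, not the authors' actual program: it extracts simple per-window features (means, standard deviations, mean resultant magnitude) from a hypothetical three-axis acceleration signal, and then computes exact Shapley values by brute-force coalition enumeration to show the attribution principle that SHAP approximates efficiently. In practice, a trained LightGBM model would be explained with the `shap` library; the windowing scheme, feature set, and toy scoring function here are assumptions for demonstration only.

```python
import itertools
import math
import numpy as np

def window_features(acc, win=50):
    """Per-window mean, std, and mean resultant magnitude for a 3-axis signal.
    acc: array of shape (n_samples, 3) holding x, y, z acceleration.
    Returns an array of shape (n_windows, 7)."""
    feats = []
    for start in range(0, len(acc) - win + 1, win):
        w = acc[start:start + win]
        mag = np.linalg.norm(w, axis=1)          # resultant acceleration per sample
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0), [mag.mean()]]))
    return np.asarray(feats)

def shapley_values(f, x, baseline):
    """Exact Shapley attributions of f(x) relative to f(baseline).
    Enumerates every feature coalition, so it is only feasible for a
    handful of features -- SHAP's tree algorithms avoid this blow-up."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in itertools.combinations(others, k):
                weight = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                with_i = baseline.copy()
                with_i[list(S) + [i]] = x[list(S) + [i]]   # coalition S plus feature i
                without = baseline.copy()
                without[list(S)] = x[list(S)]              # coalition S alone
                phi[i] += weight * (f(with_i) - f(without))
    return phi

# Toy usage: a hypothetical nonlinear score over three features.
f = lambda z: 2.0 * z[0] + z[1] * z[2]
x, base = np.array([1.0, 2.0, 3.0]), np.zeros(3)
phi = shapley_values(f, x, base)
# Efficiency property: the attributions sum to f(x) - f(baseline).
```

The efficiency check at the end (`phi.sum() == f(x) - f(base)`) is the property that makes Shapley-based explanations additive, which is what lets per-feature contributions of a LightGBM motion classifier be read off directly for experienced versus inexperienced workers.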
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.