Analyses of Diverse Agricultural Worker Data with Explainable Artificial Intelligence: XAI based on SHAP, LIME, and LightGBM
We use recent explainable artificial intelligence (XAI) methods, SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), applied to a Light Gradient Boosting Machine (LightGBM) model, to analyze diverse physical agricultural (agri-) worker datasets. We have developed various promising body-sensing systems to enhance agri-technical advancement, training and worker development, and security; however, existing methods and systems are not sufficient for in-depth analysis of human motion. We have therefore also developed wearable sensing systems (WS) that capture real-time three-axis acceleration and angular-velocity data of agri-worker motion, enabling analysis of human dynamics and statistics across different agri-fields, meadows, and gardens. After investigating the obtained time-series data using a novel program written in Python, we discuss our findings and recommendations with real agri-workers and managers. In this study, we use XAI and visualization to analyze diverse data from experienced and inexperienced agri-workers and to develop an applied method that agri-directors can use to train agri-workers.
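To make the pipeline concrete, the sketch below shows one way such an analysis could be set up in Python: windowed statistics are computed from three-axis acceleration and angular-velocity signals, a LightGBM classifier separates experienced from inexperienced workers, SHAP gives a global feature-importance view, and LIME explains a single prediction. The windowing scheme, feature names, labels, and synthetic data are assumptions made for illustration only; the authors' actual program is not described in the abstract.

# Illustrative sketch only: the authors' actual Python program, feature set, and data
# format are not described here, so the windowed features, labels, and synthetic data
# below are assumptions made purely for demonstration.
import numpy as np
import pandas as pd
import lightgbm as lgb
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.model_selection import train_test_split

SENSOR_COLS = ["ax", "ay", "az", "gx", "gy", "gz"]  # three-axis acceleration + angular velocity

def window_features(df, window=100):
    # Summarize each non-overlapping window with simple statistics (assumed feature set).
    rows = []
    for start in range(0, len(df) - window + 1, window):
        seg = df.iloc[start:start + window]
        feats = {}
        for c in SENSOR_COLS:
            feats[f"{c}_mean"] = seg[c].mean()
            feats[f"{c}_std"] = seg[c].std()
            feats[f"{c}_max"] = seg[c].max()
        rows.append(feats)
    return pd.DataFrame(rows)

# Synthetic stand-in for real wearable-sensor recordings (hypothetical data).
rng = np.random.default_rng(0)
raw = pd.DataFrame(rng.normal(size=(20000, 6)), columns=SENSOR_COLS)
X = window_features(raw)
# Placeholder labels: 1 = experienced, 0 = inexperienced (derived here from a toy rule).
y = (X["ax_std"] > X["ax_std"].median()).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# LightGBM classifier separating experienced from inexperienced workers.
model = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X_train, y_train)

# Global explanation with SHAP: which windowed features drive the classification overall.
shap_values = shap.TreeExplainer(model).shap_values(X_test)
sv = shap_values[1] if isinstance(shap_values, list) else shap_values  # handle per-class output
shap.summary_plot(sv, X_test, show=False)

# Local explanation with LIME: why one particular window was classified as it was.
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=X_train.columns.tolist(),
    class_names=["inexperienced", "experienced"],
    mode="classification",
)
explanation = lime_explainer.explain_instance(
    X_test.values[0], model.predict_proba, num_features=10
)
print(explanation.as_list())

In this combination, SHAP provides a dataset-wide ranking of which motion features matter to the classifier, while LIME isolates the contributions behind a single window, which is the usual division of labor between the two methods.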