Interpretable Ensemble-Machine-Learning models for predicting creep behavior of concrete

Authors

Abstract

This study aims to provide an efficient and accurate machine learning (ML) approach for predicting the creep behavior of concrete. Three ensemble machine learning (EML) models are selected in this study: Random Forest (RF), Extreme Gradient Boosting Machine (XGBoost), and Light Gradient Boosting Machine (LGBM). Firstly, data from the Northwestern University (NU) database are preprocessed by a prebuilt XGBoost model and then split into a training set and a testing set. Then, using Bayesian optimization with 5-fold cross-validation, the three EML models are tuned to achieve high accuracy (R² = 0.953, 0.947, and 0.946 for XGBoost, LGBM, and RF, respectively). On the testing set, the EML models show significantly higher accuracy than the equation proposed in the fib Model Code 2010 (R² = 0.377). Finally, SHapley Additive exPlanations (SHAP) values, which are based on cooperative game theory, are calculated to interpret the predictions of the models. The five most influential parameters for concrete compliance identified by the SHAP values are as follows: time since loading, compressive strength, age when the load is applied, relative humidity during the test, and temperature during the test. The patterns captured by the three models are consistent with the theoretical understanding of the factors that influence creep, which shows that the predictions are reasonable.
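As a rough illustration of the workflow summarized in the abstract, the sketch below tunes a single XGBoost regressor with Bayesian optimization (scikit-optimize's BayesSearchCV) under 5-fold cross-validation and then computes SHAP values for the held-out data; the same pattern would apply to the RF and LGBM models. The file path, feature names, and search space are placeholders, not the authors' actual data schema or hyperparameter ranges.

```python
# Minimal sketch of the tune-then-explain pipeline, assuming the NU creep data
# have been exported to a CSV with the placeholder columns below (hypothetical
# names; the real NU database contains more input variables).
import pandas as pd
import shap
import xgboost as xgb
from skopt import BayesSearchCV
from sklearn.model_selection import train_test_split

FEATURES = ["time_since_loading", "compressive_strength",
            "age_at_loading", "relative_humidity", "temperature"]
TARGET = "compliance"

data = pd.read_csv("nu_creep_database.csv")  # placeholder path
X_train, X_test, y_train, y_test = train_test_split(
    data[FEATURES], data[TARGET], test_size=0.2, random_state=42)

# Bayesian optimization over an illustrative search space, scored by R²
# under 5-fold cross-validation on the training set.
search = BayesSearchCV(
    estimator=xgb.XGBRegressor(objective="reg:squarederror"),
    search_spaces={
        "n_estimators": (100, 1000),
        "max_depth": (3, 10),
        "learning_rate": (0.01, 0.3, "log-uniform"),
        "subsample": (0.5, 1.0, "uniform"),
    },
    n_iter=50,
    cv=5,
    scoring="r2",
    random_state=42,
)
search.fit(X_train, y_train)
model = search.best_estimator_
print("Test R^2:", model.score(X_test, y_test))

# SHAP decomposes each prediction into per-feature contributions; the summary
# plot ranks features by their overall influence on predicted compliance.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, feature_names=FEATURES)
```

TreeExplainer is a natural choice here because it computes exact Shapley values efficiently for tree ensembles such as RF, XGBoost, and LGBM, rather than relying on sampling-based approximations.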


Similar Articles

Making machine learning models interpretable

Data of different levels of complexity and of ever growing diversity of characteristics are the raw materials that machine learning practitioners try to model using their wide palette of methods and tools. The obtained models are meant to be a synthetic representation of the available, observed data that captures some of their intrinsic regularities or patterns. Therefore, the use of machine le...


Research directions in interpretable machine learning models

The theoretical novelty of many machine learning methods leading to high performing algorithms has been substantial. However, the black-box nature of much of this body of work has meant that the models are difficult to interpret, with the consequence that the significant developments in machine learning theory are not matched by their practical impact. This tutorial stresses the need for interp...


Interpretable Machine Learning Models for the Digital Clock Drawing Test

The Clock Drawing Test (CDT) is a rapid, inexpensive, and popular neuropsychological screening tool for cognitive conditions. The Digital Clock Drawing Test (dCDT) uses novel software to analyze data from a digitizing ballpoint pen that reports its position with considerable spatial and temporal precision, making possible the analysis of both the drawing process and final product. We developed ...


A NOTE TO INTERPRETABLE FUZZY MODELS AND THEIR LEARNING

In this paper we turn the attention to a well-developed theory of fuzzy/linguistic models that are interpretable and, moreover, can be learned from the data. We present four different situations demonstrating both interpretability as well as learning abilities of these models.



Journal

Journal title: Cement & Concrete Composites

Year: 2022

ISSN: 0958-9465, 1873-393X

DOI: https://doi.org/10.1016/j.cemconcomp.2021.104295