ELU (Exponential Linear Unit)
Information: The ELU (Exponential Linear Unit) is a type of activation function commonly used in artificial neural networks. It is similar to the ReLU activation function, but with some modifications to handle negative inputs better. The ELU function is defined as f(x) = x for x ≥ 0 and f(x) = α(e^x - 1) for x < 0, where α is a hyperparameter that controls the output value for negative inputs. Th..
Zettelkasten/Terminology Information | 2023. 2. 27.

MAPE (Mean Absolute Percentage Error)
Information: MAPE (Mean Absolute Percentage Error) is a measure of the accuracy of a forecasting model, expressed as a percentage of the actual values. MAPE is defined as the average of the absolute percentage errors (APEs) over the forecast horizon, multiplied by 100% to express it as a percentage: MAPE = (1/n) * sum(|(actual - forecast)/actual|) * 100%. APE measures the size of the errors in the..
Zettelkasten/Terminology Information | 2023. 2. 26.

Gated spatial-temporal graph neural network based short-term load forecasting for wide-area multiple buses
Authors: Nantian Huang, Shengyuan Wang, Rijun Wang, Guowei Cai, Yang Liu, Qianbin Dai
Title: Gated spatial-temporal graph neural network based short-term load forecasting for wide-area multiple buses
Publication: International Journal of Electrical Power & Energy Systems | Volume: 145 | Issue: x | Pages: x | Year: 2023
DOI: https://doi.org/10.1016/j.ijepes.2022.108651
Introduction / Background: Modern power system..
Zettelkasten/Paper Summarization | 2023. 2. 26.

PReLU (Parametric Rectified Linear Unit)
Information: PReLU (Parametric Rectified Linear Unit) is a variation of the ReLU activation function used in neural networks. It is called "parametric" because it has a learnable parameter that can be adjusted during the training process, unlike the standard ReLU function. The PReLU function is defined as f(x) = x for x ≥ 0 and f(x) = α * x for x < 0, where α is a learnable parameter. Th..
Zettelkasten/Terminology Information | 2023. 2. 26.

CVRMSE (Coefficient of Variation of the Root Mean Squared Error)
Information: CVRMSE (Coefficient of Variation of the Root Mean Squared Error) is a measure of the variation of the errors in a regression model, normalized by the mean of the target variable. CVRMSE is defined as the ratio of the root mean squared error (RMSE) to the mean of the target variable, multiplied by 100% to express it as a percentage: CVRMSE = (RMSE / mean(target)) * 100%. RMSE measures th..
Zettelkasten/Terminology Information | 2023. 2. 25.

Monthly electric load forecasting using transfer learning for smart cities
Authors: Seungmin Jung, Sungwoo Park, Seungwon Jung, Eenjun Hwang
Title: Monthly electric load forecasting using transfer learning for smart cities
Publication: Sustainability | Volume: 12 | Issue: 16 | Pages: x | Year: 2020
DOI: https://doi.org/10.3390/su12166364
Introduction / Background: With the recent increase in the use of fossil fuels to cope with the explosive demand for energy, diverse global problems, su..
Zettelkasten/Paper Summarization | 2023. 2. 25.

LReLU (Leaky Rectified Linear Unit)
Information: LReLU (Leaky Rectified Linear Unit) is a type of activation function used in deep learning models, particularly in convolutional neural networks (CNNs). It is similar to the ReLU (Rectified Linear Unit) activation function, but it allows for a small, non-zero gradient when the input is negative. The LReLU function is defined as f(x) = max(ax, x), where a is a small constant that is u..
Zettelkasten/Terminology Information | 2023. 2. 25.
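The activation functions and error metrics summarized above are all given by short closed-form expressions, so a minimal NumPy sketch of those definitions may help; the function names, fixed alpha values, and toy numbers below are illustrative assumptions, not taken from the posts, and in a real network the PReLU alpha would be a learned parameter rather than a constant passed in.

    import numpy as np

    def elu(x, alpha=1.0):
        # ELU: f(x) = x for x >= 0, alpha * (exp(x) - 1) for x < 0
        return np.where(x >= 0, x, alpha * (np.exp(x) - 1.0))

    def lrelu(x, a=0.01):
        # Leaky ReLU: f(x) = max(a * x, x) with a small fixed slope a
        return np.maximum(a * x, x)

    def prelu(x, alpha):
        # PReLU: same shape as Leaky ReLU, but alpha is learned during training;
        # here it is simply passed in to show the forward computation
        return np.where(x >= 0, x, alpha * x)

    def mape(actual, forecast):
        # MAPE = (1/n) * sum(|(actual - forecast) / actual|) * 100%
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        return np.mean(np.abs((actual - forecast) / actual)) * 100.0

    def cvrmse(actual, forecast):
        # CVRMSE = (RMSE / mean(actual)) * 100%
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        rmse = np.sqrt(np.mean((actual - forecast) ** 2))
        return rmse / np.mean(actual) * 100.0

    x = np.array([-2.0, -0.5, 0.0, 1.5])
    print(elu(x), lrelu(x), prelu(x, alpha=0.25))
    print(mape([100, 200], [110, 190]), cvrmse([100, 200], [110, 190]))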
SELU (Scaled Exponential Linear Unit)
Information: SELU (Scaled Exponential Linear Unit) is an activation function for neural networks that was introduced in 2017 by Klambauer et al. SELU is a self-normalizing activation function, which means that it preserves the mean and variance of the activations across the layers, and thus reduces the vanishing/exploding gradients problem. SELU is defined as a piecewise function that is similar ..
Zettelkasten/Terminology Information | 2023. 2. 24.

A robust support vector regression model for electric load forecasting
Authors: Jian Luo, Tao Hong, Zheming Gao, Shu-Cherng Fang
Title: A robust support vector regression model for electric load forecasting
Publication: International Journal of Forecasting | Volume: x | Issue: x | Pages: x | Year: 2022
DOI: https://doi.org/10.1016/j.ijforecast.2022.04.001
Introduction / Background: Load forecasts are widely used in the power industry to operate and plan power systems, such as unit co..
Zettelkasten/Paper Summarization | 2023. 2. 24.

Gradient vanishing
Information: Gradient vanishing refers to a problem that occurs during the training of deep neural networks, where the gradients used to update the model's parameters become extremely small as they propagate through the layers of the network. This happens because gradients are calculated using the chain rule of differentiation, and the chain rule involves multiplying many small gradients together,..
Zettelkasten/Terminology Information | 2023. 2. 24.

An effective dimensionality reduction approach for short-term load forecasting
Authors: Yang Yang, Zijin Wang, Yuchao Gao, Jinran Wu, Shangrui Zhao, Zhe Ding
Title: An effective dimensionality reduction approach for short-term load forecasting
Publication: Electric Power Systems Research | Volume: 210 | Issue: x | Pages: x | Year: 2022
DOI: https://doi.org/10.1016/j.epsr.2022.108150
Introduction / Background: Establishment of a reliable energy management system (EMS) has become the focus given..
Zettelkasten/Paper Summarization | 2023. 2. 24.

Bagging (Bootstrap Aggregating)
Information: Bagging (Bootstrap Aggregating) is a machine learning technique that combines multiple models trained on different subsets of the training data. Bagging is often used to reduce the variance and improve the stability of the predictions. Bagging samples the training data with replacement to create multiple bootstrap samples, each of which has the same size as the original dataset. Bagg..
Zettelkasten/Terminology Information | 2023. 2. 23.

Volatility Based Kernels and Moving Average Means for Accurate Forecasting with Gaussian Processes
Authors: Gregory Benton, Wesley Maddox, Andrew Gordon Wilson
Title: Volatility Based Kernels and Moving Average Means for Accurate Forecasting with Gaussian Processes
Publication: Proceedings of the 39th International Conference on Machine Learning (ICML 2022) | Volume: 162 | Issue: x | Pages: 1798-1816 | Year: 2022
DOI: x
Introduction / Background: Gaussian processes (GP) have had significant success in time seri..
Zettelkasten/Paper Summarization | 2023. 2. 23.

Long-term dependency
Information: In machine learning and artificial neural networks, long-term dependency refers to the challenge of capturing relationships between input and output variables that are separated by a significant time gap. Long-term dependencies can be particularly important in time series forecasting and natural language processing tasks, where the input and output sequences can be very long and comp..
Zettelkasten/Terminology Information | 2023. 2. 23.
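For the SELU and gradient-vanishing notes above, a small sketch follows. The lambda and alpha constants are the ones published by Klambauer et al. (2017); the depth loop is only an illustration of how multiplying many small per-layer derivatives (the sigmoid derivative is at most 0.25) shrinks a backpropagated gradient exponentially with depth.

    import numpy as np

    # Constants from Klambauer et al. (2017), "Self-Normalizing Neural Networks"
    SELU_ALPHA = 1.6732632423543772
    SELU_LAMBDA = 1.0507009873554805

    def selu(x):
        # SELU: lambda * x for x >= 0, lambda * alpha * (exp(x) - 1) for x < 0
        return SELU_LAMBDA * np.where(x >= 0, x, SELU_ALPHA * (np.exp(x) - 1.0))

    print(selu(np.array([-2.0, -0.5, 0.0, 1.5])))

    # Gradient vanishing in a nutshell: backpropagation multiplies per-layer derivatives,
    # so a chain of sigmoid derivatives (each <= 0.25) decays rapidly as depth grows.
    for depth in (5, 20, 50):
        print(depth, 0.25 ** depth)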
Boosting
Information: Boosting is a machine learning technique that combines multiple weak learners to create a strong learner. Weak learners are models that perform only slightly better than random guessing, such as decision trees with limited depth or simple linear models. Boosting iteratively trains a sequence of weak learners, where each subsequent model focuses on the samples that were misclassified ..
Zettelkasten/Terminology Information | 2023. 2. 22.

DSTAGNN: Dynamic spatial-temporal aware graph neural network for traffic flow forecasting
Authors: Shiyong Lan, Yitong Ma, Weikang Huang, Wenwu Wang, Hongyu Yang, Pyang Li
Title: DSTAGNN: Dynamic spatial-temporal aware graph neural network for traffic flow forecasting
Publication: Proceedings of the 39th International Conference on Machine Learning (ICML 2022) | Volume: 162 | Issue: x | Pages: 11906-11917 | Year: 2022
DOI: x
Introduction / Background: With a growing number of vehicles in road networks,..
Zettelkasten/Paper Summarization | 2023. 2. 22.

MSE (Mean Squared Error)
Information: MSE (Mean Squared Error) is a commonly used metric to evaluate the performance of a machine learning model. It measures the average squared difference between the predicted values and the actual values. To calculate the MSE, you take the sum of the squared differences between the predicted and actual values, and then divide by the number of data points. The formula for MSE is: (1/n) ..
Zettelkasten/Terminology Information | 2023. 2. 22.

DNN (Deep Neural Network)
Information: DNN (Deep Neural Network) is a type of artificial neural network with multiple layers between the input and output layers. DNNs are used for various machine learning tasks, such as image and speech recognition, natural language processing, and autonomous systems. DNNs use backpropagation, a supervised learning algorithm, to adjust the weights of each layer to minimize the difference ..
Zettelkasten/Terminology Information | 2023. 2. 21.

Autoformer
Information: Autoformer is a variation of the Transformer architecture designed for long-term time series forecasting. Autoformer uses previous values of the time series as inputs to predict future values. In place of the standard self-attention mechanism of the Transformer, Autoformer introduces an auto-correlation mechanism, which consid..
Zettelkasten/Terminology Information | 2023. 2. 21.

Domain adaptation for time series forecasting via attention sharing
Authors: Xiaoyong Jin, Youngsuk Park, Danielle Maddix, Hao Wang, Yuyang Wang
Title: Domain adaptation for time series forecasting via attention sharing
Publication: Proceedings of the 39th International Conference on Machine Learning (ICML 2022) | Volume: 162 | Issue: x | Pages: 10280-10297 | Year: 2022
DOI: x
Introduction / Background: Time series forecasting has recently benefited from the development of DNN (De..
Zettelkasten/Paper Summarization | 2023. 2. 21.
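The boosting and MSE notes above can be tied together in one short sketch: shallow regression trees (weak learners) are fit one after another to the residuals of the ensemble built so far, and the final fit is scored with the MSE formula. This assumes scikit-learn is available; the toy data, tree depth, and learning rate are illustrative choices, not values from the posts.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    # Toy 1-D regression problem (illustrative only)
    rng = np.random.default_rng(0)
    X = rng.uniform(-3.0, 3.0, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

    # Boosting: each weak learner is fit to the residuals left by the current
    # ensemble and added with a small learning rate.
    prediction = np.zeros_like(y)
    learning_rate = 0.1
    for _ in range(100):
        residuals = y - prediction
        tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
        prediction += learning_rate * tree.predict(X)

    # MSE = (1/n) * sum((actual - predicted)^2)
    mse = np.mean((y - prediction) ** 2)
    print(f"training MSE after boosting: {mse:.4f}")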