Application of Regression Analysis for Classification
The regression problem is that of predicting the value of some function, which is also called approximation, or fitting an approximating function. Regression can be classified into two types depending on the association between the dependent and independent variables: linear regression and non-linear regression.
A. Linear Regression Problem
In linear regression, we approximate the value of a dependent variable yi from one or more given explanatory variables xi. In figure 1, xi acts as the prediction supervisor, and each xi is shown with its equivalent value of y.
The best-fitting curve is the red line, the one with the least total measured distance from the data points. Every new value of x is presumed to have its equivalent y value on that line. The model applies equally to a linear function of more than one variable: for each vector xi the corresponding value is yi. If ε denotes the error variable, then

yi = β0 + β1 xi + ε
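As a minimal sketch of this fit (assuming NumPy and synthetic supervisor data, since no dataset is specified here), the following estimates β0 and β1 by least squares and reads off y for a new x on the fitted line:

```python
import numpy as np

# Synthetic supervisor data: y depends linearly on x plus noise (epsilon).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=50)
eps = rng.normal(0.0, 1.0, size=50)          # error variable epsilon
y = 2.0 * x + 1.0 + eps

# Least-squares fit of y_i = b0 + b1 * x_i using the design matrix [1, x].
X = np.column_stack([np.ones_like(x), x])
b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]

# Every new value of x gets its equivalent y value on the fitted line.
x_new = 4.2
y_pred = b0 + b1 * x_new
print(f"intercept={b0:.3f}, slope={b1:.3f}, f({x_new})={y_pred:.3f}")
```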
When the dependent variable cannot be expressed as such a linear function of the parameters, the regression problem is not linear in nature.
B. Non-Linear Regression Problem
The non-linear regression problem is harder because the dependent variable cannot be determined by any linear combination of the model parameters; some non-linear combination of one or more independent variables is involved. As in linear regression, the approximation is based on previous observations, but we cannot weigh all input parameters equally: the data values nearest the query, taken from the supervisor data set, have the greatest effect on the estimate. Methods are therefore needed to preserve this non-linearity; while total preservation of non-linearity is difficult to guarantee, a method is only as effective as its ability to maintain the model's non-linearity. If yi is the actual output for an input xi and yi' is the point on the fitted curve given by the model parameters, then the error is
Error = yi - yi' = Distance(yi, yi')
Greedy approaches are commonly employed to approximate non-linear regression problems. Although greedy methods are often sufficient, they perform only local optimization, which is not always suitable. For the same reason, iterative approaches are often used instead, but their drawbacks are time consumption and added delay.
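A small sketch of such an iterative fit, assuming SciPy and a hypothetical exponential model y = a·exp(b·x) whose parameters enter the prediction non-linearly (neither the model nor the data comes from the original text):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical non-linear model: no linear combination of a, b gives y.
def model(x, a, b):
    return a * np.exp(b * x)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0, 40)
y = model(x, 1.5, 0.8) + rng.normal(0.0, 0.05, size=x.size)

# curve_fit iteratively minimizes sum_i (y_i - y_i')^2; like any local
# optimizer it can settle in a local optimum, so the starting guess matters.
(a_hat, b_hat), _ = curve_fit(model, x, y, p0=[1.0, 1.0])

y_fit = model(x, a_hat, b_hat)
print("max |y_i - y_i'| =", np.max(np.abs(y - y_fit)))
```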
C. Non-Linear Regression Application
Non-linear regression is applied here to determine the performance level of the proposed dimensionality reduction process. Consider n fixed input samples X = {x1, x2, x3, …, xn} in R^D, with target values Y = {y1, y2, y3, …, yn}. Our aim is to estimate the functional dependence so that an expected output ytest = f(xtest) can be computed efficiently, where xtest is the input at the test stage.

Dimensionality reduction is used here as a way of dealing with the difficulties of the high-dimensional input space in regression tasks. In a preprocessing step, the dimensionality of the input space is reduced while the mutual information with the target function is preserved. Given a suitable reduction to a smaller space, the target function can still be predicted from the features in the subspace; this prediction is carried out by the regression function. The method is computationally efficient since it runs in the low-dimensional subspace, and high accuracy is achieved by combining the regression approach with dimensionality reduction. The smallest adequate subspace is obtained by minimizing the dimensionality with MIDR, which still maintains full knowledge about the target values. A single polynomial function P of degree L is used to approximate the function g:
g(x) = P(Ax) = P(W)

P(W) = Σ_{|α| ≤ L} cα W^α

where W = Ax are the subspace coordinates, the sum runs over all monomials W^α of total degree at most L, and cα are the polynomial coefficients.
The polynomial coefficients are fitted by minimizing the L2 norm:

P' = argmin || y - P(W) ||2
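A sketch of this least-squares polynomial fit in the projected subspace, with a random matrix A standing in for the MIDR projection (an assumption, since the text does not give MIDR's construction) and a toy target function:

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(W, degree):
    """All monomials of W's columns up to the given total degree (plus bias)."""
    n = W.shape[0]
    cols = [np.ones(n)]
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(W.shape[1]), deg):
            cols.append(np.prod(W[:, list(idx)], axis=1))
    return np.column_stack(cols)

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))            # inputs x_i in R^D with D = 10
A = rng.normal(size=(10, 2))              # stand-in projection to a d=2 subspace
W = X @ A                                 # W = Ax, the reduced coordinates
y = np.sin(W[:, 0]) + 0.5 * W[:, 1] ** 2  # toy target defined on the subspace

# Fit polynomial coefficients by least squares: argmin || y - P(W) ||_2.
Phi = poly_features(W, degree=3)          # polynomial degree L = 3
coeffs = np.linalg.lstsq(Phi, y, rcond=None)[0]
print("residual L2 norm:", np.linalg.norm(y - Phi @ coeffs))
```

Because the model is linear in the coefficients, the fit reduces to one linear least-squares solve, which is why it is cheap compared with GPR or SVM.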
The minimization problem can be solved easily because it is linear in the polynomial coefficients. Cross-validation selects the values of the required subspace dimension d and the polynomial degree L. Methods other than polynomial regression could naturally carry out the regression in the resulting linear subspace; other effective techniques such as GPR or SVM could be substituted, but they are computationally more demanding. In this efficiency-versus-complexity trade-off we picked polynomial regression, since it is a well-performing method that can be evaluated cheaply; in particular, it does not require any subset of the training set to be stored for computations at the test stage. In this setting, MIDR can be viewed as a generalization of Projection Pursuit: Projection Pursuit Regression (PPR) minimizes the residual variance, whereas MIDR maximizes the mutual information between the predictive function and the target values.

The recommended linear dimensionality reduction based on mutual information can also be used for classification. The classification problem is distinct from regression in the domain of the target values. For a fixed set of n inputs X = {x1, x2, …, xn} in R^D, the target values C = {c1, c2, …, cn} come from a discrete set, as opposed to R in the regression case. Here, too, our aim is to estimate a functional dependency C = f(x) in such a way that a predicted output can be computed efficiently,
ctest = f(xtest), at the test stage for an input xtest. A typical classification method is KNN: the test input xtest is classified by finding its K nearest neighbors among the training inputs x'train and assigning ctest = ctrain(x'train). If the inputs lie in a high-dimensional space, however, the nearest-neighbor search becomes computationally prohibitive. A suitable way to overcome this curse of dimensionality is to reduce the dimensionality of the input space prior to NN classification. The LDA technique does exactly this by projecting the inputs onto the subspace spanned by the eigenvectors of a combination of the intra- and inter-class covariance matrices. To apply MIDR to the classification task, we need to evaluate the mutual information between the continuous inputs and the discrete target values; fortunately, this can easily be expressed in terms of conditional entropies.
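A sketch of this reduce-then-classify pipeline, assuming scikit-learn and using PCA as a stand-in for the MIDR projection (an assumption, since MIDR is not a standard library routine; both map the inputs to a low-dimensional subspace before KNN):

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Two-class data in a high-dimensional space R^D (D = 50 here).
X, C = make_classification(n_samples=400, n_features=50, n_informative=5,
                           random_state=0)
X_tr, X_te, C_tr, C_te = train_test_split(X, C, random_state=0)

# PCA stands in for MIDR: project inputs onto a d=5 subspace so the
# nearest-neighbor search no longer runs in the full 50-dimensional space.
proj = PCA(n_components=5).fit(X_tr)
W_tr, W_te = proj.transform(X_tr), proj.transform(X_te)

# KNN in the subspace: c_test is taken from the nearest training inputs.
knn = KNeighborsClassifier(n_neighbors=5).fit(W_tr, C_tr)
print("test accuracy:", knn.score(W_te, C_te))
```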
As an example application of linear and non-linear regression, we now take a two-class problem. The scatter plot in figure 2 shows the overlap between the two class variables. After applying linear and non-linear regression to the class variables, the scatter plot is drawn again. Figure 3 depicts the scatter plot of the two class variables after applying linear regression; it shows that the overlap between the classes still persists. Figure 4 demonstrates the efficacy of non-linear regression in the scatter plot of the two class variables.
Figure 2. Scatter plot for the raw two-class variables
Figure 3. Scatter plot for the two-class variables after applying linear regression
Figure 4. Scatter plot for the two-class variables after applying non-linear regression with normalization
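For illustration only, a sketch that draws a figure-2-style scatter plot from synthetic overlapping two-class data (the actual class variables of the study are not available here):

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for figure 2: two overlapping Gaussian classes.
rng = np.random.default_rng(3)
class_a = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(100, 2))
class_b = rng.normal(loc=[1.0, 1.0], scale=1.0, size=(100, 2))

plt.scatter(class_a[:, 0], class_a[:, 1], label="class 1", alpha=0.6)
plt.scatter(class_b[:, 0], class_b[:, 1], label="class 2", alpha=0.6)
plt.title("Scatter plot for raw two-class variables")
plt.legend()
plt.show()
```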
Image source: Dr. R. Harikumar