


Linear regression

So what does this actually mean?

The importance of Algorithms vs Data

But unfortunately, getting a large amount of data is difficult and expensive, so in practice we rely on choosing the right algorithm for the ML problem at hand.

Now, coming back to the point: there are broadly four types of ML systems.

For now, we are looking at supervised ML.

In supervised ML, there are two types of problems into which we can classify a task.

In regression problems we predict a target numeric value, while in classification we group data points according to their kind (class).

One of the most famous algorithms is the Linear Regression model, which is used when the data shows a straight-line relationship in the features-vs-target graph.

The linear regression model is based on the equation y = mx + c, where m is the slope and c is the intercept (also called the bias).
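To make this concrete, here is a minimal sketch of the model as a function. The slope and intercept values below are illustrative assumptions, not fitted parameters:

```python
def predict(x, m, c):
    """Predict y for input x using the linear model y = m*x + c."""
    return m * x + c

# With slope m = 2 and intercept c = 1, an input of x = 3 gives y = 7.
print(predict(3, m=2, c=1))  # 7
```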

In the simplest case, the data has a single independent variable, and we train the model on it to predict the value of the dependent variable.

In the example above, we can see a straight red line that follows this straight-line equation.

But how can we find the slope and intercept of this line?

To answer this, we need to find the best-fit line; following that line, we can predict our final output. The best fit is achieved by minimizing the error, or residuals.

When the error is at its minimum, the line is the best fit for the data points.

There are many methods by which we can achieve this minimum error.

The mathematical equation for finding the residual is as follows.
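The equation image seems to be missing here; the residual for each data point is usually defined as the difference between the observed and predicted value, e_i = y_i − (m·x_i + c). A sketch, using hypothetical data:

```python
def residuals(xs, ys, m, c):
    """Residual for each point: e_i = y_i - (m * x_i + c)."""
    return [y - (m * x + c) for x, y in zip(xs, ys)]

xs = [1, 2, 3]
ys = [3, 5, 8]          # hypothetical observations
print(residuals(xs, ys, m=2, c=1))  # [0, 0, 1]
```

The last point sits one unit above the line, so its residual is 1; the others lie exactly on the line.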

By tweaking the slope and intercept parameters, we can find the best-fit line.

There are different ways to measure this residual/cost/error, such as MSE and RMSE, and different methods to minimize it, such as the normal equation and gradient descent, among many others.
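As an illustration, here is a rough sketch of gradient descent minimizing the MSE for y = mx + c. The learning rate, step count, and toy data are illustrative assumptions:

```python
def fit(xs, ys, lr=0.01, steps=5000):
    """Fit y = m*x + c by gradient descent on the mean squared error."""
    m, c = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Partial derivatives of MSE with respect to m and c.
        grad_m = (-2 / n) * sum(x * (y - (m * x + c)) for x, y in zip(xs, ys))
        grad_c = (-2 / n) * sum(y - (m * x + c) for x, y in zip(xs, ys))
        m -= lr * grad_m
        c -= lr * grad_c
    return m, c

xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]       # toy data lying exactly on y = 2x + 1
m, c = fit(xs, ys)
print(round(m, 2), round(c, 2))  # approximately 2.0 and 1.0
```

Each step nudges the slope and intercept in the direction that reduces the error, which is exactly the "tweaking the parameters" described above.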

And to know whether we have achieved the minimum error and our line is the best fit, we have the R-squared statistic.

where RSS is the residual sum of squares, which we try to minimize using an optimization technique (like gradient descent).
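A minimal sketch of computing R² = 1 − RSS/TSS, where TSS is the total sum of squares around the mean; the data values are hypothetical:

```python
def r_squared(ys, preds):
    """R^2 = 1 - RSS/TSS, where RSS = sum((y - pred)^2), TSS = sum((y - mean)^2)."""
    mean_y = sum(ys) / len(ys)
    rss = sum((y - p) ** 2 for y, p in zip(ys, preds))
    tss = sum((y - mean_y) ** 2 for y in ys)
    return 1 - rss / tss

ys = [3, 5, 7, 9]
preds = [3, 5, 7, 9]    # a perfect fit gives R^2 = 1
print(r_squared(ys, preds))  # 1.0
```

An R² close to 1 means the line explains almost all the variation in the data; an R² near 0 means it does little better than predicting the mean.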

We will look at optimization and regularization techniques in another blog. Thanks!
