
An SVM is purely a linear model from the right perspective, and, if you're being really reductive, ReLU neural networks are piecewise linear. I think this framing may obscure more than it helps: picking the right transformation for your particular case is a highly nontrivial problem. Why sin(x) and x^2 rather than, say, tanh(x) and x^(1/2)?
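To make the "linear from the right perspective" point concrete, here is a minimal sketch (my own toy example, not from the comment): a 1-D dataset that no single threshold on x can separate becomes linearly separable after the hand-picked feature map phi(x) = (x, x^2).

```python
import numpy as np

# 1-D data whose class depends on x^2: not separable by any threshold on x.
x = np.array([-2.0, -1.5, -0.3, 0.2, 1.4, 2.1])
y = np.array([1, 1, 0, 0, 1, 1])  # label 1 where |x| > 1

# Lift into feature space phi(x) = (x, x^2). The rule "x^2 > 1" is now a
# linear decision: a hyperplane with normal w and offset b in feature space.
phi = np.stack([x, x**2], axis=1)
w, b = np.array([0.0, 1.0]), -1.0   # hyperplane: x^2 - 1 = 0
pred = (phi @ w + b > 0).astype(int)
print(pred.tolist())                 # [1, 1, 0, 0, 1, 1] — matches y
```

The kernel trick lets an SVM do this implicitly, but the choice of kernel is exactly the "which transformation?" question raised above.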


ReLU networks have the nice property of being piecewise linear, and during training they also optimise their own non-linear transformation over time.
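The piecewise-linear property is easy to verify numerically. A small sketch with hand-picked weights (illustrative, not anything from the thread): a one-hidden-layer ReLU net is exactly affine between the "kinks" where a hidden unit's pre-activation crosses zero, so its second difference vanishes inside a region and jumps at a kink.

```python
import numpy as np

# Tiny 1-hidden-layer ReLU net, scalar in, scalar out.
# With these weights, hidden units kink at x = -0.5, 0.3, and 0.5.
W1 = np.array([[1.0], [-1.0], [2.0]])
b1 = np.array([0.5, 0.3, -1.0])
W2 = np.array([1.0, 2.0, -1.0])

def f(x):
    return float(W2 @ np.maximum(W1 @ [x] + b1, 0.0))

def second_diff(x, h=0.05):
    # Zero iff f is affine on [x - h, x + h].
    return f(x - h) - 2 * f(x) + f(x + h)

print(abs(second_diff(0.1)) < 1e-9)  # True: no kink near 0.1, f is affine
print(abs(second_diff(0.3)) > 1e-9)  # True: unit 2 kinks at x = 0.3
```

Training moves W1 and b1, i.e. it moves the kinks and the slopes between them — that is the sense in which the network learns its own non-linear transformation.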



