5 Comments
Nick

I saw your post on LinkedIn, but didn't get a chance to connect there. I'm a mathematical physicist publishing a new PDE theory of thermodynamics. If possible, I'd like to send you my idea for a physics-informed AI project.

The PDE generates 1/4-wavelength sine curves as Hamiltonian solutions. I don't know much about activation functions and backpropagation, but it occurred to me that with just two experimental points on the monotonic solution curve (plus a defined left boundary at zero), an AI could solve for the critical point of each Hamiltonian contour, i.e. the right-side Neumann boundary (dy/dx = 0), using gradient descent (which I understand) together with activations and backpropagation (which I do not).
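
For concreteness, here is a minimal sketch of how such a fit could look, assuming a hypothetical quarter-wave ansatz y(x) = A·sin(πx/(2L)) and two made-up data points; x = L is then the Neumann boundary where dy/dx = 0:

```python
import numpy as np

# Hypothetical quarter-wave ansatz: y(x) = A * sin(pi * x / (2 * L)).
# It satisfies y(0) = 0 (left boundary) and dy/dx = 0 at x = L
# (the right-side Neumann boundary / critical point we want).
def model(x, A, L):
    return A * np.sin(np.pi * x / (2.0 * L))

# Two made-up "experimental" points (generated from A = 1.2, L = 1.5).
xs = np.array([0.3, 0.7])
ys = np.array([0.37, 0.80])

A, L = 1.0, 1.0  # initial guesses
lr = 0.01        # learning rate
for _ in range(20_000):
    err = model(xs, A, L) - ys
    s = np.sin(np.pi * xs / (2.0 * L))
    c = np.cos(np.pi * xs / (2.0 * L))
    # Analytic gradients of the squared-error loss w.r.t. A and L.
    gA = 2.0 * np.sum(err * s)
    gL = 2.0 * np.sum(err * A * c * (-np.pi * xs / (2.0 * L ** 2)))
    A -= lr * gA
    L -= lr * gL

print(f"A = {A:.3f}; Neumann boundary (dy/dx = 0) at x = L = {L:.3f}")
```

With these synthetic points the loop should land near A ≈ 1.2 and L ≈ 1.5, so the critical point falls straight out of the fitted parameters; no activation functions are involved, because the ansatz itself is the model.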

ebrownargenta@gmail.com

On LinkedIn search: Erik Brown Argenta

AI Stories

Sounds interesting, but it doesn't sound like you need to know about activations and back propagation in this case. An AI is simply an overparameterized / overconstrained model that you fit data to. Activation functions add non-linearity to the model to increase the space of functions you can fit, and the universal approximation theorem tells us that, if we make the model large enough, this space can approximate any continuous function on a bounded domain arbitrarily well.
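
To make that concrete, here is a toy sketch (hypothetical layer sizes and data) of a one-hidden-layer network: the tanh activation supplies the non-linearity, and back propagation is just the chain rule applied layer by layer inside the gradient-descent loop:

```python
import numpy as np

rng = np.random.default_rng(0)

# A nonlinear target that a purely linear model cannot fit.
x = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
y = np.sin(np.pi * x)

# One hidden layer of 16 units. Without the tanh, the two linear
# maps would collapse into a single linear map: the activation is
# what enlarges the space of functions the model can represent.
W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 1.0, (16, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(5_000):
    h = np.tanh(x @ W1 + b1)       # hidden activations
    err = (h @ W2 + b2) - y        # prediction error
    # Back propagation: the chain rule, one layer at a time
    # (gradients of 1/2 * mean squared error).
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("final mean squared error:", float((err ** 2).mean()))
```

Remove the tanh and the two layers collapse into a single linear map, which is exactly why the activation is what buys you the larger function space.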

I will send you an email and we can continue the discussion there :)

Jughead

You described the concept in a very simple way. Keep posting about the basics of ML and DL in this manner. Looking forward to learning more from you.

AI Stories

Thanks so much for your kind words and for reading!

Dr Teodora Szasz

I post every day on data science, AI/ML, and the career skills that will take you to the next level in your career:

https://teodoracoach.substack.com/