Our Team
-
David J. Hoxie, PhD
Key interests: applying machine learning to nanophotonics, and teaching, communicating, and advocating for ethical, reliable, and repeatable research practices in simulation-, artificial-intelligence-, and machine-learning-based scientific discovery.
The Great Distractor
-
eV
Specializes in work and study distraction via targeted application of tail and paws to various forms of digital and physical media.
Mission Statements
-
Teaching
Machine learning has moved to the forefront of much of today’s research. Many researchers are seeking ways to uncover how machine learning works, yet the foundations of machine learning are built upon centuries of biophysical and chemical models. The founders of AI developed their methods as attempts to model human learning on a physical basis. The statement “we don’t know how machine learning works” is therefore empirically wrong, and it means that much of the research effort spent trying to figure out how these algorithms behave is not being allocated efficiently toward pushing science forward.
My future ambitions lie in researching optical sensing methodologies for different classes of materials, employing machine learning to help discover the materials needed for more efficient nanoscale biomolecular sensors, light-emitting diodes, and optical artificial neural networks, to name a few, while simultaneously seeking to understand how, through fundamental physical laws, the human brain recreates a physical representation of the world. The more I study the intersection of machine learning and nanophotonic sensing, the closer this connection seems to grow. Eventually my ambitions may even lead me down the path of studying neurobiology.
My approach, however, has been novel compared with related work: it insists on opening the “black box” of machine learning. This has proved to be a resilient path for material identification from optical signals, as evidenced by our publication in the Royal Society of Chemistry journal Nanoscale, which demonstrates that understanding how machine learning works is vital to finding better, more efficient devices. During my PhD I found that the current standard of teaching machine learning as a black box is hindering future research. Researchers should know that the machine learning they use is built from the same calculations they learned in their coursework. That awareness would open avenues for testable, hypothesis-driven machine learning, much like my work in nanophotonics, and would better ensure that research funding is properly allocated.
I hope one day to illuminate the connection between machine learning and methods used in statistical physics, microbiology, and chemistry. These methods from the physical sciences are the direct underlying theory behind many of the classic machine learning papers. I believe it is vital to highlight this for STEM students rather than continue with the idea of “treat it as a black box; we don’t need to know how it works.” If students are aware of the foundations of machine learning, it may help them mitigate algorithms that hallucinate data or rest on incomplete computational models, and their effort can instead be directed toward better identifying false data generated by AI, and more.
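To make that lineage concrete, here is a minimal sketch (illustrative only, not drawn from my published work) of a classic Hopfield network, whose energy function has the same form as the Ising model Hamiltonian from statistical physics. The pattern size, random seed, and helper names are my own choices for the example.

import numpy as np

# Minimal Hopfield network sketch: its energy E = -1/2 * sum_ij w_ij s_i s_j
# has the same form as the Ising model Hamiltonian from statistical physics.

rng = np.random.default_rng(0)

def train_hebbian(patterns):
    """Hebbian learning: w_ij proportional to the correlation of units i and j."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)          # no self-connections
    return w / len(patterns)

def energy(w, s):
    """Ising-style energy of a +/-1 state vector s."""
    return -0.5 * s @ w @ s

def recall(w, s, steps=200):
    """Asynchronous updates: set one randomly chosen unit at a time by the sign of its local field."""
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if w[i] @ s >= 0 else -1
    return s

# Store one +/-1 pattern, corrupt a few bits, then run recall.
pattern = rng.choice([-1, 1], size=16)
w = train_hebbian(pattern[None, :])
noisy = pattern.copy()
noisy[:3] *= -1                      # flip three bits
print("energy before:", energy(w, noisy))
restored = recall(w, noisy)
print("energy after: ", energy(w, restored))
print("recovered pattern:", np.array_equal(restored, pattern))

With symmetric weights and no self-connections, each asynchronous update can only lower this energy, which is exactly the zero-temperature spin dynamics a statistical physics student would already recognize.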
-
Research
Description goes here