I am currently exploring the use of deep learning for data synthesis, with a focus on the trade-off between data utility and privacy. I am a member of the Centre for Digital Trust and Society at UoM.
More generally, I conduct academic research, using data analysis, visualisation, and statistical and machine learning methods to solve problems, uncover hidden patterns and insights in data, and present them in an understandable way.
→ comparing utility and disclosure risk of synthetic data with samples of Census microdata
→ comparing GANs for synthesising Census microdata
→ a new semantic and syntactic similarity measure for determining the similarity between tweets, using word embeddings
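To give a flavour of the general idea behind embedding-based tweet similarity, here is a minimal sketch of the semantic side only: average the word vectors of each tweet and compare the results with cosine similarity. The toy embeddings and function names below are illustrative assumptions for this page, not the published measure (which also accounts for syntactic similarity); a real system would use pretrained vectors such as word2vec or GloVe.

```python
import numpy as np

# Tiny toy embeddings for illustration only; a real system would load
# pretrained vectors (e.g. word2vec or GloVe).
EMBEDDINGS = {
    "cat": np.array([0.9, 0.1, 0.0]),
    "dog": np.array([0.8, 0.2, 0.1]),
    "sat": np.array([0.1, 0.9, 0.2]),
    "ran": np.array([0.2, 0.8, 0.3]),
    "mat": np.array([0.0, 0.2, 0.9]),
}

def tweet_vector(tweet):
    """Average the embeddings of the known words in a tweet."""
    vectors = [EMBEDDINGS[w] for w in tweet.lower().split() if w in EMBEDDINGS]
    if not vectors:
        return np.zeros(3)  # no known words: zero vector
    return np.mean(vectors, axis=0)

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (0 if either is zero)."""
    norm = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / norm) if norm else 0.0

def tweet_similarity(t1, t2):
    """Similarity between two tweets via their averaged word embeddings."""
    return cosine_similarity(tweet_vector(t1), tweet_vector(t2))
```

Averaging is the simplest way to pool word vectors into a sentence vector; it ignores word order entirely, which is one reason syntactic information is worth measuring separately.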
I use Python and R most frequently, and work with big and small data, SQL and NoSQL databases, APIs, and both Windows and Linux. I also have mathematical skills and experience with other programming languages (PHP, Java, C++, etc.).
My PhD (with the CfPM at MMU) explored how methods such as machine learning and visualisation can be applied to complex social science data, using a large, interlinked social database as a case study.
For my MSc I designed and programmed a chatbot, Bob, that had long-term memory. He is very old now and out of date, as I no longer maintain him, but you can still chat to a more forgetful version here.
Chatbots are now far more prevalent (and advanced) than when Bob was created, and he simply cannot compete with ChatGPT and similar systems, which learn from massive amounts of data.
If you would like to get in touch, then drop me a line! Please email me, or fill in your details here (all fields are required):