Developing a Machine Learning Model for High-Accuracy Classification of Celestial Objects
Achieved an 80% accuracy rate in classifying celestial objects as Stars, Galaxies, or Quasi-Stellar Objects using a machine learning model that analyzes solar data. The work covered data collection, data preprocessing, model optimization, iterative improvement, and documentation. I gathered a comprehensive dataset of solar data with a range of features and attributes, then cleaned it by handling missing values and outliers. I chose machine learning algorithms appropriate for the classification task and fine-tuned the model's hyperparameters with techniques such as grid search and random search to improve performance, as in the sketch below. Lastly, I documented the entire process, including data sources, preprocessing steps, model architecture, and results, to support reproducibility and collaboration.
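A minimal sketch of the preprocessing and grid-search step described above, assuming a scikit-learn RandomForestClassifier on a tabular dataset; the file name, column names, and hyperparameter grid are illustrative assumptions, not details from the original project:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative: a survey dataset with numeric features and a "class" label
# (Star / Galaxy / QSO). The actual file and column names are assumptions.
df = pd.read_csv("celestial_objects.csv")
X = df.drop(columns=["class"])
y = df["class"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Scale features and classify in one pipeline so preprocessing is cross-validated too.
pipeline = Pipeline([
    ("scaler", StandardScaler()),
    ("clf", RandomForestClassifier(random_state=42)),
])

# Grid search over a small hyperparameter grid, as described in the case.
param_grid = {
    "clf__n_estimators": [100, 300],
    "clf__max_depth": [None, 10, 20],
}
search = GridSearchCV(pipeline, param_grid, cv=5, scoring="accuracy", n_jobs=-1)
search.fit(X_train, y_train)

print("Best params:", search.best_params_)
print("Held-out accuracy:", search.best_estimator_.score(X_test, y_test))
```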
About the expert
A Junior Machine Learning Engineer.
Dyimah applied their knowledge of technology, data analysis, and business analysis in AI and data on the following market: the USA.
Dyimah built products like an ML Celestial Objects Identifier using a variety of tools such as PyTorch, TensorFlow, and scikit-learn.
Other cases by Dyimah
Development of a High-Accuracy Medical Diagnosis Model Using Machine Learning
In collaboration with a fellow Data Scientist, I developed a machine learning model that analyzes medical data and predicts whether a patient has a given medical condition, achieving an accuracy rate of over 90%. Machine learning with TensorFlow drove the model's development, data analysis and data visualization techniques helped us extract insights from the medical data, and Python was used to implement and fine-tune the model, resulting in its strong predictive accuracy.
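A minimal sketch of a binary diagnostic classifier in TensorFlow/Keras of the kind described in this case; the synthetic data, feature count, and network architecture here are illustrative assumptions rather than the actual model:

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

# Illustrative stand-in for the medical dataset: 1000 patients, 20 numeric features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")  # placeholder labels, not real diagnoses

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# Small feed-forward network with a sigmoid output for a yes/no diagnosis.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X.shape[1],)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=20, batch_size=32)
print("Validation accuracy:", model.evaluate(X_val, y_val, verbose=0)[1])
```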
Advanced NLP Techniques for Effective Classification of Disaster Tweets
After extensive team discussions, I applied three distinct methodologies, VADER sentiment analysis, Bag of Words, and a Hugging Face BERT language model, to classify tweets into two categories: "Disaster Tweets" and "Non-Disaster Tweets". Using BERT-based language models, natural language processing (NLP), data analysis, TensorFlow, and Python programming, I carried out an in-depth evaluation to determine the optimal approach for the task.
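A minimal sketch of the Bag of Words baseline among the three approaches, assuming a tweets CSV with "text" and "target" columns (the file name and columns are assumptions); the VADER and BERT variants would reuse the same train/evaluate split with different feature extractors:

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Illustrative: a CSV of tweets with a "text" column and a binary "target"
# (1 = disaster, 0 = non-disaster). File and column names are assumptions.
df = pd.read_csv("disaster_tweets.csv")

X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["target"], test_size=0.2, stratify=df["target"], random_state=42
)

# Turn tweets into token counts, then fit a linear classifier on top.
baseline = Pipeline([
    ("bow", CountVectorizer(stop_words="english", max_features=10_000)),
    ("clf", LogisticRegression(max_iter=1000)),
])
baseline.fit(X_train, y_train)

print(classification_report(y_test, baseline.predict(X_test),
                            target_names=["Non-Disaster", "Disaster"]))
```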
Similar cases
Setting up efficient customer support
Organized pre-launch product UX testing, regular user research, and new-feature backlog prioritization
Organized the company's participation in high-profile contests, resulting in Nimb winning Smart City Expo World Congress's call for solutions in the "Safe Cities" category and becoming a finalist in the XPRIZE Women's Safety competition
Set up efficient client support
Enhancing Analytical Infrastructure and Reducing Fraudulent Activities
Led the migration of the analytical infrastructure, enhancing data resources and decreasing update errors by 90%. Increased identifiable traffic sources from 60% to 95%. Played a crucial role in reducing fraudulent responses from 28% to 1-4% by strengthening moderation and establishing a fraud-analytics function. Contributed to a project that cut the cost of applications from groups by 70% and reduced vacancies in central Moscow from 20% to 6%.