Fine-Tuning LLMs

Fine-tuning large language models (LLMs) has become an indispensable tool for enterprises looking to enhance their operational processes. While the foundational training of LLMs offers a broad understanding of language, fine-tuning molds these models into specialized tools capable of understanding niche topics and delivering more precise results. By training LLMs for specific tasks, industries, or … Read more
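To make the idea concrete, here is a toy sketch of what fine-tuning means: starting from already-learned ("pretrained") parameters and continuing training on a small task-specific dataset, rather than training from scratch. A real LLM fine-tune would use a deep-learning framework; this one-variable linear model is only an illustration of the principle, and all the numbers are made up.

```python
# Conceptual sketch of fine-tuning: start from "pretrained" parameters and
# continue training on a small task-specific dataset. A real LLM fine-tune
# would use PyTorch / Hugging Face; this toy linear model only illustrates
# adapting existing weights instead of training from scratch.

def train(w, b, data, lr=0.05, epochs=500):
    """Plain gradient descent on mean squared error for y = w*x + b."""
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in data:
            err = (w * x + b) - y
            grad_w += 2 * err * x / len(data)
            grad_b += 2 * err / len(data)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# "Pretrained" parameters learned on a broad task: y ≈ 2x
pretrained_w, pretrained_b = 2.0, 0.0

# Small domain-specific dataset with a shifted relationship: y = 2x + 1
task_data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]

# Fine-tune: continue training from the pretrained starting point
w, b = train(pretrained_w, pretrained_b, task_data)
print(round(w, 2), round(b, 2))  # w ≈ 2, b ≈ 1 after adapting to the task
```

The key point carries over to LLMs: the pretrained starting point does most of the work, and fine-tuning only nudges the parameters toward the specialized task.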

Embeddings
Embeddings are a fundamental concept in machine learning and natural language processing (NLP). They are used to convert non-numeric data, such as text or categorical variables, into numerical vectors that machine learning algorithms can process. These vectors, known as embeddings, capture the semantic meaning and relationships between different pieces of data, enabling models to learn patterns and make accurate predictions. … Read more
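A minimal sketch of the idea: words mapped to numeric vectors and compared with cosine similarity. The 3-dimensional vectors below are hand-made for illustration; real embeddings (word2vec, sentence transformers, and similar) are learned and typically have hundreds of dimensions.

```python
# Toy embeddings: semantically related words get nearby vectors, so their
# cosine similarity is high; unrelated words score much lower.
import math

embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # close to 1.0
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # much lower
```

This is the property models exploit: distances between vectors stand in for semantic relationships between the original pieces of data.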

LangChain Cheatsheet

LangChain simplifies building AI applications on large language models (LLMs) by providing an intuitive interface for connecting to state-of-the-art models like GPT-4 and optimizing them for custom applications. It supports chains that combine multiple models and modular prompt engineering for more impactful interactions. Sections covered: key features; code snippets (1. creating a custom tool, 2. creating a custom chain, 3. using memory); additional … Read more
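To show the shape of the custom-tool and chain pattern without pulling in the library, here is a plain-Python mimic. LangChain's real `Tool` and chain classes have a richer API than this; the class, function, and tool names below are illustrative only.

```python
# Plain-Python sketch of the pattern LangChain formalizes: a "tool" is a
# named function with a description, and a "chain" composes steps, passing
# each output to the next.

class Tool:
    def __init__(self, name, func, description=""):
        self.name = name
        self.func = func
        self.description = description

    def run(self, text):
        return self.func(text)

def make_chain(*steps):
    """Compose tools/callables into a single pipeline."""
    def chain(text):
        for step in steps:
            text = step.run(text) if isinstance(step, Tool) else step(text)
        return text
    return chain

word_count = Tool("word_count", lambda t: f"{t} [{len(t.split())} words]",
                  description="Append the word count to the input.")
shout = Tool("shout", str.upper, description="Uppercase the input.")

pipeline = make_chain(word_count, shout)
print(pipeline("hello from langchain"))  # → HELLO FROM LANGCHAIN [3 WORDS]
```

In LangChain itself the steps are typically prompts and model calls rather than string functions, but the composition idea is the same.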

Ollama Cheatsheet

Here is a comprehensive Ollama cheat sheet containing the most often used commands and explanations. Sections covered: installation and setup; running Ollama; model library and management; advanced usage; integration with Visual Studio Code; AI developer scripts; other tools and integrations; community and support; documentation and updates; additional tips, tools, and references. … Read more
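Beyond the CLI, Ollama exposes a local REST API (by default `POST /api/generate` on port 11434, per its documentation). A hedged sketch of calling it from Python follows; the model name `llama2` and the prompt are placeholders, and actually sending the request requires a running `ollama serve`, so the example below only builds and prints the request payload.

```python
# Sketch of calling a locally running Ollama server over its REST API.
# Sending the request requires Ollama to be running; here we only
# construct the JSON payload and define (but do not call) the sender.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local endpoint

def build_generate_payload(model, prompt, stream=False):
    """Construct the JSON body Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": stream}

def generate(model, prompt):
    """Send a generation request to a running Ollama server (not executed here)."""
    body = json.dumps(build_generate_payload(model, prompt)).encode()
    req = urllib.request.Request(OLLAMA_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

payload = build_generate_payload("llama2", "Why is the sky blue?")
print(json.dumps(payload))
```

With `stream=False` the server returns a single JSON object whose `response` field holds the full completion, which is what `generate` extracts.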

Autonomous AI Agents

Autonomous AI agents are intelligent computer programs that operate independently, making decisions and taking actions without human intervention. These agents are powered by advanced machine learning algorithms and large language models (LLMs), enabling them to process vast amounts of data and perform complex tasks with remarkable accuracy and speed. In this article, we will delve into the world of autonomous … Read more
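The core of any such agent is an observe–decide–act loop. A toy sketch of that loop follows; real LLM-based agents replace the hand-written `decide` rule with a model call and the integer "environment" with real tools, so everything here is an illustration only.

```python
# Minimal autonomous-agent loop: observe the state, decide on an action,
# act, and repeat until the goal is reached or a step budget runs out.

def run_agent(start, goal, max_steps=20):
    state = start
    actions = []
    for _ in range(max_steps):
        if state == goal:                                      # observe: goal reached?
            break
        action = "increment" if state < goal else "decrement"  # decide
        state += 1 if action == "increment" else -1            # act
        actions.append(action)
    return state, actions

final_state, actions = run_agent(start=0, goal=3)
print(final_state, len(actions))  # → 3 3
```

The `max_steps` budget is the part worth copying: without it, an agent whose decisions never reach the goal would loop forever.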

What is AGI – Artificial General Intelligence?

Artificial General Intelligence (AGI): A Comprehensive Overview for Professionals. Artificial General Intelligence (AGI) is a concept that has garnered significant attention in recent years, particularly with the emergence of advanced AI tools like ChatGPT. As a researcher in the field, it is essential to understand the nuances of AGI and its potential implications for various industries. In this essay, we … Read more

Microsoft DP-100 – Designing and Implementing a Data Science Solution on Azure – Free Questions

As part of the Microsoft Certified Azure Data Scientist Associate certification, the DP-100 exam measures your ability to accomplish technical tasks. Example Questions: You need to resolve the local machine learning pipeline performance issue. What should you do?
A. Increase Graphic Processing Units (GPUs).
B. Increase the learning rate.
C. Increase the training iterations.
D. Increase Central Processing Units (CPUs).
Correct Answer: A
You need to … Read more

Trading with Python Intro – Data Import

Traditionally, there have been two general ways of analyzing market data: In recent years, computer science and mathematics have revolutionized trading; it has become dominated by computers that help analyze vast amounts of available data. Algorithms make trading decisions faster than any human being could. Machine learning and data mining techniques are growing in popularity, and all of that falls … Read more
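A first step in any such pipeline is importing price data. Here is a minimal sketch that parses OHLC (open/high/low/close) rows from CSV text and computes close-to-close daily returns; real workflows would pull data from a broker or vendor API (often via pandas), and the inline CSV is made-up sample data.

```python
# Basic market-data import: parse OHLC rows from CSV text and compute
# simple daily returns between consecutive closing prices.
import csv
import io

SAMPLE_CSV = """date,open,high,low,close
2024-01-02,100.0,102.5,99.5,101.0
2024-01-03,101.0,103.0,100.0,102.0
2024-01-04,102.0,102.5,98.0,99.0
"""

def load_prices(csv_text):
    """Parse CSV text into a list of dicts with numeric price fields."""
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        rows.append({"date": row["date"],
                     **{k: float(row[k]) for k in ("open", "high", "low", "close")}})
    return rows

def daily_returns(rows):
    """Simple close-to-close returns between consecutive rows."""
    return [(b["close"] - a["close"]) / a["close"]
            for a, b in zip(rows, rows[1:])]

prices = load_prices(SAMPLE_CSV)
print(len(prices), [round(r, 4) for r in daily_returns(prices)])
```

Once the data is in this numeric form, the statistical and machine-learning analysis described above can start from the returns series rather than raw text.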

Data Scientist Interview Questions – Explain what precision and recall are

After the predictive model has been built, the most important question is: how good is it? Does it predict well? Evaluating the model is one of the most important tasks in a data science project; it indicates how good the predictions are. For classification problems we very often look at metrics called precision and recall; to define them in detail, let's quickly … Read more
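As a quick worked example: precision = TP / (TP + FP) asks "of everything predicted positive, how much was right?", while recall = TP / (TP + FN) asks "of all actual positives, how many were found?". The labels below are made-up toy data.

```python
# Computing precision and recall for a binary classifier from scratch.

def precision_recall(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = [1, 1, 1, 0, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]

p, r = precision_recall(y_true, y_pred)
print(p, r)  # → 0.75 0.75
```

Here the model made 4 positive predictions of which 3 were correct (precision 0.75), and found 3 of the 4 actual positives (recall 0.75).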

How would you validate/test a predictive model?

Why evaluate/test a model at all? Evaluating the performance of a model is one of the most important stages in predictive modeling; it indicates how successful the model has been on the dataset. It enables you to tune parameters and, in the end, test the tuned model against a fresh cut of data. Below we will look at a few of the most common validation metrics used … Read more
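The tune-then-test workflow described above can be sketched in a few lines: split off a fresh cut of data, pick the best candidate on a validation split, and only then score on the untouched test split. To keep the mechanics visible, the "models" here are just constant predictors scored by mean absolute error, and the dataset is made up.

```python
# Holdout validation sketch: train / validation / test splits, model
# selection on the validation split, final scoring on the test split.

def mean_absolute_error(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

data = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3, 0.8, 1.1, 1.0, 1.2]
train, valid, test = data[:6], data[6:8], data[8:]

# Candidate "models": constant predictions (stand-ins for tuned parameters)
candidates = [0.5, 1.0, 1.5]
best = min(candidates,
           key=lambda c: mean_absolute_error(valid, [c] * len(valid)))

# Final check against the fresh cut of data, never used during tuning
test_error = mean_absolute_error(test, [best] * len(test))
print(best, round(test_error, 2))  # → 1.0 0.1
```

The discipline being illustrated is that the test split is touched exactly once, after all tuning decisions are made; otherwise the reported error is optimistically biased.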