*Could there be a technology that can assist in recording data at each step and make that data clearly available to all?*

Today, each transaction performed is recorded in a book, ledger, spreadsheet, or database. However, these records are private and centralized. Blockchain, by contrast, offers a growing list of records, created at each step of a transaction and linked together across nodes, that is decentralized and open to all. Blockchain technology combines public transparency with privacy protection to create a single data platform.

Let us see the definition of Blockchain:

Blockchain is a distributed, decentralized, peer-to-peer ledger (a record of transactions) that is duplicated across numerous nodes (computers) in a network, allowing data about any event or transaction to be recorded in real time. It is a chain of blocks, maintained by nodes, used to record digital assets with the help of a secure hashing algorithm. By design, blockchains are immutable and resistant to the modification of data.

This network of computers manages the Blockchain without the use of a hierarchy. Blockchain networks are commonly referred to as peer-to-peer networks because of their flat architecture.

These computers check each transaction individually before adding it to a 'block' of data. These blocks are then uploaded to the Blockchain and downloaded to each machine.

Like a page in a record book, each block stores a number of transactions. Each block is connected to the one before and after it.

Hashes, sequences of numbers and characters, are used to record these transactions. Each transaction's hash is created using the previous transaction's hash. This generates a chain effect, making it impossible to modify the order of hashes. As a result, once a transaction has been added, it cannot be changed.

**What distinguishes a Blockchain ledger from a database? Isn't it the same thing when it comes to capturing, organizing, monitoring, and regulating data?**

- Hashes of the transactions are transparent to all the nodes in the network.
- Blocks have specific storage capacities and, when filled, are closed and linked to the previously filled block.
- Databases are owned by a central authority, a firm, or a government organization. A Blockchain, on the other hand, is a peer-to-peer network in which any node may communicate with any other node.

Let us familiarize ourselves with some important terms before digging deep into the Blockchain.

Payments, supply chain details, health records, real estate contracts, and other transactions are all recorded in a (digital) book called a ledger.

SHA-256 is a cryptographic hash algorithm that accepts input of any length and gives an output of a 256-bit, or 64-character, hash. The SHA-256 hashing algorithm ensures that the original information cannot be deduced from the hash output, making it very safe.
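A minimal illustration using Python's standard `hashlib` module (the input strings are arbitrary examples) shows these properties:

```python
import hashlib

# Hash two slightly different inputs; even a one-character change
# produces a completely different 64-character (256-bit) digest.
h1 = hashlib.sha256(b"Send 10 coins to Alice").hexdigest()
h2 = hashlib.sha256(b"Send 10 coins to Alice!").hexdigest()

print(h1)        # a 64-hex-character digest
print(len(h1))   # 64
print(h1 == h2)  # False
```

The same input always yields the same digest, but nothing about the digest reveals the input.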

A hash is a shortened digital signature of the data. Hashes are sent over a public network. Each block uses a hash function to refer to the previous block.

Mining on the Blockchain is the means of verifying each transaction stage. Blockchain miners are the individuals involved in this process. Before adding a new block, miners need to solve complex mathematical problems to calculate hashes. Once a miner's solution has been confirmed, the transactions are added to the Blockchain and the miner receives a reward in cryptocurrency (e.g., bitcoin).

Any computer that is part of the peer-to-peer network and maintains a copy of the Blockchain is referred to as a node.

A Merkle tree, also known as a "binary hash tree," is a data structure used for storing transactions on a blockchain efficiently.
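The idea can be sketched in a few lines of Python (an illustrative toy, not Bitcoin's exact construction): transaction hashes are combined pairwise, level by level, until a single root hash remains.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions: list) -> bytes:
    """Hash the (non-empty) transaction list pairwise, level by level,
    until a single root hash remains."""
    level = [sha256(tx) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2 == 1:          # odd count: duplicate the last hash
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"tx1", b"tx2", b"tx3", b"tx4"])
print(root.hex())
```

Changing any single transaction changes the root, so one 32-byte value summarizes the integrity of every transaction in the block.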

Decentralization in the Blockchain is the transfer of control of any activity to a distributed network, unlike a centralized organization that has full control over your activities.

In the form of a distributed ledger, each network member owns a copy of the same data.

Companies frequently share information with their partners. Each time data is modified, the possibility of data loss or of inaccurate data entering the workstream increases. Thanks to decentralized data storage, every entity has access to a real-time, shared view of the data.

Decentralization can help mitigate sources of vulnerability in systems that depend too heavily on specific actors.

Decentralization may also aid in optimizing resource distribution, ensuring that promised services are delivered with improved performance and consistency.

The purpose of Blockchain is to make digital information accessible by allowing it to be recorded and shared but not altered.

Blockchain technology is a combination of three cutting-edge technologies:

- Cryptographic algorithms
- A distributed ledger on a peer-to-peer network
- A mechanism for keeping network transactions and records on computers

Each block contains some data, the hash of the block, and the hash of the previous block.

**DATA:** The data stored depends on the type of blockchain; for Bitcoin, for instance, it is transaction details.

**HASH:** You can compare a hash to a fingerprint. It identifies a block and is always unique. Once a block is created, its hash is calculated. Changing something inside the block will cause the hash to change.

**HASH OF PREVIOUS BLOCK:** This hash is used while calculating the hash of the current block, which effectively creates a chain of blocks.
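These three fields can be sketched as a minimal Python class (illustrative only; the class name and fields are mine, and SHA-256 stands in for the block hash):

```python
import hashlib

class Block:
    """A minimal block: some data, the previous block's hash, and its own hash."""
    def __init__(self, data: str, previous_hash: str):
        self.data = data
        self.previous_hash = previous_hash
        self.hash = self.compute_hash()

    def compute_hash(self) -> str:
        # The block's hash covers both its data and the previous block's hash.
        return hashlib.sha256((self.data + self.previous_hash).encode()).hexdigest()

genesis = Block("genesis", "0")
block2 = Block("Alice pays Bob 5", genesis.hash)
block3 = Block("Bob pays Carol 2", block2.hash)

# Tampering with block2's data changes its recomputed hash,
# so block3's stored previous_hash no longer matches.
block2.data = "Alice pays Bob 500"
print(block3.previous_hash == block2.compute_hash())  # False
```

Because each block's hash folds in the previous hash, any edit to an earlier block is immediately visible further down the chain.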

Now, let's take an example. Here we have a chain of 3 blocks.

Let's assume you've tampered with the second block, which changes that block's hash. As a result, block three and all subsequent blocks become invalid, since they no longer hold a valid hash of the previous block. Altering a single block therefore invalidates all subsequent blocks.

To make your Blockchain legitimate again, you would have to recalculate the hashes of all the other blocks. These days, computers are extremely fast, capable of calculating hundreds of thousands of hashes per second. This means that hashes alone are insufficient to prevent tampering.

To counteract this, Blockchains employ a mechanism known as **Proof-of-Work**. It is a method that increases the time needed to rebuild blocks. This approach makes tampering with the blocks extremely difficult, because if you tamper with one block, you'll have to recalculate the Proof-of-Work for all subsequent blocks. As a result, a Blockchain's security is derived from its clever hashing and the Proof-of-Work process.
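A toy Proof-of-Work can be sketched as a brute-force search for a nonce (the function name and `difficulty` parameter are mine, chosen for illustration):

```python
import hashlib

def proof_of_work(data: str, difficulty: int = 4) -> int:
    """Find a nonce such that sha256(data + nonce) starts with
    `difficulty` zero characters: cheap to verify, costly to produce."""
    nonce = 0
    prefix = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

nonce = proof_of_work("block data", difficulty=4)
print(nonce)
```

Raising `difficulty` by one multiplies the expected work by 16, which is why redoing the Proof-of-Work for a whole chain of blocks quickly becomes impractical.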

Since Blockchain is a peer-to-peer network, when someone joins the network, they get a full copy of the Blockchain.

Now let's see what happens when someone creates a new block.

Everyone on the network receives the new block. Each node then verifies the block to ensure that it hasn't been tampered with, and adds it to its copy of the chain. All the nodes in this network create consensus: they agree about which blocks are valid and which aren't.

Blocks that have been tampered with will be rejected by the network's other nodes. To effectively tamper with a Blockchain, you'll need to tamper with all of the chain's blocks, redo each block's Proof-of-Work, and gain control of more than half of the peer-to-peer network. Then and only then will your modified block be acknowledged by the rest of the network. This is nearly impossible to accomplish!

Blockchain is a very advanced technology that provides efficient verification, regulation, transparency, and traceability of data. The technology can also be easily integrated with Big Data. Blockchain solutions can also help cut costs and eventually make many services more competitive.

Author of the blog:

*Mohammad Amaan (Campus Chapter Guru - AI Probably)*

**Automated machine learning (AutoML)** automates the application of machine learning to real-world problems. AutoML encompasses the whole workflow, from a raw dataset to a deployable machine learning model. It makes machine learning accessible, even to those with no prior experience in the discipline.

A conventional machine learning model has the following workflow:

The need for machine learning systems has skyrocketed in recent years, owing to ML's current success in a wide range of applications. Yet many businesses fail to adopt ML models even though machine learning could help them.

First, they must assemble a staff of seasoned data scientists, who command high wages. Second, even with an excellent team, determining which model is suitable for your problem typically requires as much experimentation as expertise.

The popularity of machine learning in a broad spectrum of applications has resulted in an ever-increasing need for machine learning systems that non-experts can use right away. **AutoML** aims to automate as many stages of an ML pipeline as possible with the least amount of work and without jeopardizing the model's performance.

By automating repetitive processes, it boosts productivity. As a result, a data scientist may concentrate on achieving the end goal more efficiently.

Automating the ML process also aids in the prevention of mistakes, like misconfigured parameters and misinterpreted data, that may occur when working manually.

Finally, AutoML is a step towards democratizing machine learning by making the capability of ML available to everyone.

Some famous AutoML frameworks are:

Auto-sklearn is a free and open-source AutoML library built on **Scikit-learn**. It relieves a machine learning user of the burden of algorithm selection and hyperparameter tuning: it determines the best-performing model and the optimal set of hyperparameters for the provided dataset. It includes feature-engineering approaches such as one-hot encoding, feature normalization, and dimensionality reduction, and uses sklearn estimators to tackle classification and regression problems.

Have a look at the official documentation and paper!

MLBox is a powerful automated machine learning Python library. According to the official documentation, MLBox provides the following features:

- Fast reading and distributed data preprocessing/cleaning/formatting.
- Highly robust feature selection and leak detection.
- Accurate hyper-parameter optimization in high-dimensional space.
- State-of-the-art predictive models for classification and regression (Deep Learning, Stacking, LightGBM, etc.).
- Prediction with model interpretation.

If you are looking for a thorough dig, here is the official documentation!

TPOT is an open-source Python AutoML tool that uses genetic programming to optimize machine learning pipelines. The data flow of the TPOT architecture is depicted in the graphic below. TPOT requires a cleaned dataset, then performs feature processing, model selection, and hyperparameter optimization to deliver the best-performing model. TPOT extends the Scikit-learn framework with its own regressor and classifier methods. It works by searching through thousands of alternative pipelines to identify the optimal one for your data.

**An example machine learning pipeline** (Source - TPOT Documentation)

For further insight, take a look at the official documentation!

Auto-Keras is an open-source library for automated machine learning. Built on the Keras deep learning framework, it provides utilities for automatically searching for suitable architectures and hyperparameters for deep learning models. The API is based on the Scikit-learn API design, making it incredibly simple to use.

Through automated Neural Architecture Search (NAS) techniques, Auto-Keras tries to simplify the ML process. The deep learning engineer/practitioner is primarily replaced by Neural Architecture Search, which automatically optimizes the model.

Interested in learning more? You may read the official documentation here!

H2O's AutoML is a framework created by H2O for automating machine learning workflows. H2O supports the most commonly used statistical and machine learning methods, such as gradient boosting machines, generalized linear models, deep learning models, and others.

H2O has an automated machine learning module that builds a pipeline using its methods. To optimize its pipelines, it conducts a thorough search using its feature engineering approaches and model hyperparameters.

H2O automates some of the most challenging data science and machine learning activities, including feature engineering, model validation, model tuning, model selection, and model deployment.

Take a peek at the official documentation!

Initially, the goal of AutoML was to automate repetitive operations like **pipeline development** and hyperparameter tuning so that data scientists can focus more on the business problem at hand.

AutoML also intends to make the technology available to everyone, rather than just a select few. AutoML and data scientists can collaborate to speed up the ML process, allowing the true efficacy of machine learning to be realized.

AutoML's success is mainly determined by its acceptance and the improvements achieved in this field. However, it seems apparent that AutoML will play a significant role in the future of machine learning.

We hope you liked the blog and found this useful! Check out our amazing blog post on how to create a simple neural network using Tensorflow.

Also, check out our YouTube channel for more great videos and projects.

Characteristics of Python:-

Getting started with Python:-

We have covered the basics of Python. We know you want to learn more, right? Yep! You can learn every concept of Python in detail from our Python course, which is a blend of theory, practicals, and projects.

Let us understand the differences between different data structures in Python (to learn about data structures, please check out our post here):-

We need to understand object-oriented programming to make the best of Python along with data structures. In fact, OOP is your friend when you want to solve complex programming challenges. Object-oriented programming allows us to pack data and functionality together while keeping the details hidden. As a result, code written with OOP is flexible, modular, and abstract, making it particularly useful for creating more extensive programs.

Advantages of the OOP concepts:-

All the OOP concepts in Python in a single code example:-

```
# class - encapsulation
class Parent_class_name:
    class_attri = "i belong to class"  # class attribute

    def __init__(self):  # constructor
        self.attri_1 = "initialized with declaration of obj"  # object attribute
        self.attri_2 = "i represent state/property of object"  # object attribute

    def method(self, name):
        print("i belong to the object", name)

    def poly_method(self, var1, var2, var3=0):
        print(var1 + var2 + var3)

# child class - inheritance
class Child_class_name(Parent_class_name):
    def __init__(self):
        self.attri_1 = "reinitializes in Child class"
        print("child attribute", self.attri_1, "parent attribute", Parent_class_name.class_attri)

# initialize object
Parent_class_obj = Parent_class_name()
Parent_class_obj.method("Object_name")

# child class object
Child_class_obj = Child_class_name()

# polymorphism - default argument allows calls with 2 or 3 values
Parent_class_obj.poly_method(1, 2, 3)
Parent_class_obj.poly_method(1, 2)

# abstraction
from abc import ABC, abstractmethod

class abstract_class(ABC):
    @abstractmethod
    def abs_method(self):
        pass

class inherit_abs(abstract_class):
    # the abstract method must be defined in the subclass
    def abs_method(self):
        print("In inherited class of abstract class")

# cannot create an object of an abstract class (raises TypeError)
# abstract_obj = abstract_class()
abstract_sub_obj = inherit_abs()
abstract_sub_obj.abs_method()
```

Great! Learning OOPs makes a lot of difference.

Kick-start your programming journey with **AI Probably**'s **FREE** 3-hour crash course on **Python** (launching soon) for beginners.

This course is **beginner-friendly** and provides a **hands-on** learning experience with Python. It introduces fundamental programming concepts like functions, loops, built-in data types, and **object-oriented programming**. We will also go through the fundamentals of building a program in Python from a set of simple instructions, step by step. Not just that, we also provide two fun and unique **Python projects**.

Do not miss the opportunity. See you there!

*Why Python?* Python is one of the trending programming languages, and it is designed to be used in a range of web frameworks and applications, as shown:-

We hope you had happy learning from this article. To learn about a Python full-stack framework, check out our article on **Django**. Watch out for more helpful content from us on social media. Keep learning!!!

In this blog, we will look at some features of Django and demonstrate its **ease of use**. If you are as excited as I am, let's dive in.

Assumptions for the reader:

- You have a basic knowledge of Python
- You are excited!

A web framework is a **server-side application framework** designed to support the development of dynamic websites. Django is one such framework, available for Python. Basically, Django is a code library that makes the life of a web developer much easier.

**But how? How exactly does a web framework help?**

A web framework is a **collection** of classes/APIs with predefined code that you can use in your program to add a feature to your web application.

So, we have a web framework with a set of APIs. We just need to add our snippets of code wherever required, and done! That solves the problem.

The Web framework for Perfectionists with Deadlines.

Simply defined, Django is an **open-source Python web framework** that allows web and app developers to **create** and **deploy** big projects rapidly, safely, and in a consistent, **"clean"** style. It comes with a set of **out-of-the-box tools** that support, enrich, and accelerate the traditional web development process, allowing you to complete otherwise difficult tasks **quickly** and to a high standard.

It also makes working with **databases** a lot easier. You don't need to work directly with a database because Django takes care of **creating** it on the go as you define models. You can also **easily swap** one SQL database for another. With Django, you can have a fully functional, secure, and admin-editable website in just a few hours.

Nevertheless, it's a framework worth having in your portfolio!

Some features Django offers us are:

You have, without a doubt! A few of them are listed here:

Django follows the **MVT pattern**. What is MVT?

The MVT pattern is a type of **software design** pattern. It is made up of three major components:

- **Model** - It aids in database management. Your database is defined in the backend.
- **View** - It executes the business logic and interacts with the model to carry data and render a template.
- **Template** - It handles the user interface.

So, the process is pretty simple. A user sends a request via the browser. Django acts as a controller, checking the URL for available resources, and the associated view is called. The view interacts with the model and template: it retrieves data from the database via the model, formats it, and renders it with the help of the template as a response.

- ORM - Here, you can define your data models.
- Django comes with a fully featured and secure authentication system. It handles your user accounts, permissions, etc.
- It provides a powerful, production-ready interface to manage content on your site. If you want to add a user or create a group, Django has a built-in admin interface created automatically.
- Django offers full support for **translating** text into different languages, plus locale-specific formatting (e.g., dates, times, time zones).

No worries at all! **AI Probably** has got you covered. We are soon launching a **FREE** 3-hour crash course on **Python with Django**.

This course will teach you Django from the **ground up**, so you don't need any prior knowledge to get started, just a basic understanding of Python. We start from the basics and work our way up to understanding the **Django MVT architecture**, setting up an **SQLite database**, adding **views** and **templates** to our website, and handling **requests and responses**, step by step. This course has it all covered in great detail and is backed up with a **complete example course project**.

What's more? We have **gamification-based learning**! What does that mean?

That means you learn complex topics through **interactive games** on our website! With our latest advancement in gamified learning methods, you now stand a chance to **improve** your learning engagement and see your skills grow exponentially!

The Django community is a **vibrant** group of talented programmers, and there are numerous tools to help you get started. Django also provides excellent documentation that is extremely straightforward and very informative.

Let us create a small project using Django.

To begin, you'll need Python installed. Python comes pre-loaded on OS X and Linux-based systems, so if you're using one of those, you're good to go. If you're using Windows, you'll need to download and install Python from the Python website.

Now, all we have to do is open up a terminal and run

```
pip install django
```

to install Django without impacting anything else. The most **recent** version of Django will be downloaded and installed. To get started with Django, we'll need to create a project. Navigate (using the terminal) to the folder where you want to create your project and type

```
django-admin startproject myproject
```

If you look inside this directory, you'll notice that it has created a `myproject` folder with the following structure:

```
myproject/
    manage.py
    myproject/
        __init__.py
        settings.py
        urls.py
        wsgi.py
```

This is the default project structure for a Django project. Django comes with a **built-in server** that you may use to test your app. It makes it very easy to get your projects up and running. Run the server:

```
python manage.py runserver
```

You'll see the server start up and direct you to http://127.0.0.1:8000.

When you run `startproject`, four files are generated: `manage.py`, `settings.py`, `urls.py`, and `wsgi.py`.

- The `manage.py` file **replaces** the `django-admin` command we used earlier; it is your **main entry point** into your application and runs any management commands related to the project, such as creating an admin user. It is extremely unlikely that you will need to modify this file.
- The `settings.py` file contains all of our **project's settings**, such as the project's name, the URL it will live on, and so on.
- The `urls.py` file contains the **routing information** for all Django requests; it maps various URLs to their corresponding views.
- The `wsgi.py` file is used when you deploy your project to a server. It enables a WSGI HTTP server, such as *Gunicorn*, to mount your application and serve HTTP requests.

I hope you have learned a lot from this article. To learn about a front-end web framework, check out our article on Introduction to React. Watch out for more helpful content and be sure to stay connected on our social media handles!

Happy Learning!

Once again, here we are with 5 more questions!

**Question:-** Given a linked list, determine if there is a loop or not.

A linked list without a loop simply looks like this:-

While a linked list with a loop looks like this:-

All that we have to do is determine the presence of a loop. It is very easy with a dictionary in Python. Let us see how:-

Working of the code:-

Writing the code for this question is as easy as understanding the logic:-

Can you guess the time complexity? Yes! It is O(n).
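The approach described above can be sketched as follows (class and function names are mine; a set serves as the dictionary since we only need the keys):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def has_loop(head) -> bool:
    """Record each visited node; revisiting one means there is a loop."""
    seen = set()
    current = head
    while current is not None:
        if current in seen:
            return True
        seen.add(current)
        current = current.next
    return False

# Build 1 -> 2 -> 3, then point 3 back to 2 to create a loop.
a, b, c = Node(1), Node(2), Node(3)
a.next, b.next = b, c
print(has_loop(a))   # False
c.next = b
print(has_loop(a))   # True
```

Each node is visited at most once and set lookups are O(1) on average, giving the O(n) time mentioned above (at the cost of O(n) extra space).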

**Question:-** You will be given an array of integers where you have to find the largest k elements in the array.

For example, let us say the array is:-

Then k largest elements would be:-

You might ask, "Sorting works, right?" Yes, it definitely works.

But what if I told you there is a better solution? By better, I mean better time complexity.

We know sorting takes O(n log n) time. But we can use heaps to reduce the time complexity. So let's dive into the solutions.

**Max-heap** for finding the largest k elements:-

Basically, a max-heap is a tree where a parent is always greater than its children. So the root of the tree is the greatest element.

Let us see how this works for an example:-

After construction of max-heap:-

Popping k elements from the heap:-

Code to implement max-heap:-

We reverse the sign of the elements because the default heap in Python is a min-heap.
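A sketch of this sign-reversal trick with Python's built-in `heapq` module (the function name is mine):

```python
import heapq

def k_largest_max_heap(arr, k):
    """Simulate a max-heap by negating values (heapq is a min-heap),
    then pop the top k elements."""
    heap = [-x for x in arr]
    heapq.heapify(heap)                              # O(n)
    return [-heapq.heappop(heap) for _ in range(k)]  # O(k log n)

print(k_largest_max_heap([3, 8, 1, 9, 4, 7], 3))  # [9, 8, 7]
```

Building the heap is O(n) and each of the k pops is O(log n), for O(n + k log n) overall.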

Now let us look at the **min-heap**. You might be wondering, why a min-heap? But a min-heap makes the job of finding the k largest elements faster.

Let us see the steps involved in this method followed by an example:-

Min-heap of first k elements:-

Once the min-heap is built, we start comparing each element with root as shown:-

To give you an idea, why we are doing this, let us consider one instance in the above simulation of the heap:-

In our heap, there are k elements, and we know the minimum element among them (3). Now we have found another element (8) that is greater than the root, so there is no chance for the root to be among the k largest elements; hence we replace it.

In this way, we build the heap that contains the k largest elements, and the root of the heap is the kth largest element of the array:-

The final step is to pop all the elements from min-heap:-

It is interesting, right? Let us write the code for it.
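The min-heap steps above can be sketched like this (function name is mine):

```python
import heapq

def k_largest_min_heap(arr, k):
    """Keep a min-heap of the k largest elements seen so far.
    Any element larger than the root replaces it."""
    heap = arr[:k]
    heapq.heapify(heap)
    for x in arr[k:]:
        if x > heap[0]:
            heapq.heapreplace(heap, x)   # pop root, push x: O(log k)
    # Popping everything yields ascending order; reverse for largest-first.
    return sorted(heap, reverse=True)

print(k_largest_min_heap([3, 8, 1, 9, 4, 7], 3))  # [9, 8, 7]
```

The heap never grows beyond k elements, so this runs in O(n log k) time with O(k) extra space, which beats the max-heap approach when k is small relative to n.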

Which method do you think is easier (min- or max-heap)? Let us know in the comments.

**Question:-** If you have the inorder and preorder traversals, construct the binary tree.
Firstly, what are inorder and preorder traversals? They simply define the order in which the nodes are displayed. There are three such traversals.

Before jumping into the code, let us try to guess the inorder and preorder of different Binary Trees.

Example Binary Tree 1:-

Preorder:- 5 2 1 3 4

Inorder:- 1 2 5 4 3

Example Binary Tree 2:-

Preorder:- 5 2 1 3 4

Inorder:- 1 2 5 3 4

Did you observe that the preorder for the above two trees is exactly the same, yet the trees are different? Hence we cannot form a tree based on the preorder alone.

Example Binary Tree 3:-

Preorder:- 1 2 4 3 5

Inorder:- 4 2 1 5 3

Now we are ready to reverse the process, i.e., we are going to construct the binary tree based on traversals.

Recursive steps involved in the construction of tree:-

Explanation:-

The first element of preorder is the root

In inorder, the left part of the root is the left subtree, and the right part is the right subtree

Now, we call the function recursively by passing both inorder and preorder of left and right subtrees as parameters.

Recursion is easy once we understand the depth of recursion. Let us build the recursive function in the following steps:-

Given preorder and inorder, we should be able to convert it into a tree:-

Let us see how we can implement our logic:-

First, we search for 1 (the 1st element of the preorder) in the inorder and divide the inorder into left and right subtrees; then, according to the number of elements on the left side of the inorder, we also divide the preorder (3 each).

Now we repeat the process for the left and right subtrees.

Putting all this together will give us the code:-

Output tree for the sample input is:-

What next? We have covered linked lists, heaps, and trees. Yeah! It's graphs, one of the topics every student preparing for Amazon must know.

**Question:-** Given a matrix and a word, find out if you can form the word from sequentially adjacent characters of the matrix.

Let us examine the question carefully.

If the given matrix is:-

Word search:-

If the word given is RABS, our answer would be yes.

Similarly, we can find words like AIPROBABLY, BLOG, DLIP, ROB etc.

And we cannot form words like BAG, BOAS, etc.

*Our approach:-*

We are going to build a graph with each character as a node, and the possible moves (up, down, right, left) from that character will be the edges of the graph.

Then the total graph looks like this:-

Now that we have formed the graph, each path in the graph is a possible string. So, for the given word, we will find whether there is such a path in the graph or not.

This can be done recursively by starting from the node that matches the first character of the word. Then, we move character by character to form the string using the possible moves.

Here is the code that implements the above logic:-
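The recursive search can be sketched as a depth-first backtracking over the grid (function names are mine, and the small grid below is a made-up example rather than the original matrix):

```python
def exist(board, word):
    """Depth-first search from every cell; adjacent means up/down/left/right,
    and a cell may not be reused within one path."""
    rows, cols = len(board), len(board[0])

    def dfs(r, c, i):
        if i == len(word):
            return True
        if not (0 <= r < rows and 0 <= c < cols) or board[r][c] != word[i]:
            return False
        saved, board[r][c] = board[r][c], "#"       # mark the cell as visited
        found = (dfs(r + 1, c, i + 1) or dfs(r - 1, c, i + 1) or
                 dfs(r, c + 1, i + 1) or dfs(r, c - 1, i + 1))
        board[r][c] = saved                          # backtrack
        return found

    return any(dfs(r, c, 0) for r in range(rows) for c in range(cols))

grid = [["R", "A", "B"],
        ["O", "I", "S"]]
print(exist(grid, "RABS"))  # True
print(exist(grid, "BAG"))   # False
```

Marking a cell before recursing and restoring it afterwards is what prevents a character from being used twice in the same path.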

**Question:-** Given an array of integers, form a wavy array.
A wavy array looks similar to this:-

The array should increase from the 1st to the 2nd element, decrease from the 2nd to the 3rd, then increase from the 3rd to the 4th, and so on.

For example, if the given array is:-

The wavy array is:-

Let us see how we can obtain this wavy array. Initially, let us sort the array:-

Now we simply exchange the adjacent values, and that gives us the wavy array as shown:-

And this is how the code looks:-
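The sort-and-swap approach can be sketched as follows (the function name and sample array are mine; swapping pairs starting at index 1 produces the increase-first pattern defined above, assuming distinct elements):

```python
def wavy(arr):
    """Sort, then swap adjacent pairs starting at index 1,
    giving arr[0] < arr[1] > arr[2] < arr[3] > ..."""
    arr = sorted(arr)
    for i in range(1, len(arr) - 1, 2):
        arr[i], arr[i + 1] = arr[i + 1], arr[i]
    return arr

print(wavy([3, 1, 4, 2, 5]))  # [1, 3, 2, 5, 4]
```

Sorting dominates the cost, so the overall time complexity is O(n log n).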

We are delighted to extend our support and hope your efforts pay off. All the best!!

Also read: Top DSA Questions asked at Google

`you need to test the drug on enough people to be sure that the result is not due to randomness or chance but due to the efficacy of the drug.`

You will start by testing the drug on a sample of the population of people.
**Population:**

A population refers to the complete group of data about which the conclusions are to be drawn. The word population extends its meaning from just humans to objects, species, and measurement of anything that has common traits.

**Sample:**

A sample is a small part of the population that is generally chosen at random. The sample is a smaller representation of the population, which is tested in order to draw conclusions about the population.

Having said that, we are ready to understand what a hypothesis is.

In statistics, a hypothesis is a statement made on the nature of a population which is testable via experiment or observation.

Let us take an example to understand this complete concept. Given below is some observed data of two medicines that cure the same disease,

We can see that medicine A has cured 8 out of 10 people, and medicine B has cured just 6 out of 10 people. What hypothesis can we interpret from this data?

After seeing the data, we can say that medicine A is 20% more efficient than medicine B. If this is correct, then we must be able to achieve similar results from further experimentation. `For a hypothesis to be acceptable, it should give similar results as those of the preliminary results.`

This is where we are required to test a hypothesis.

Let us perform the same experiment on another ten different people and see whether our hypothesis holds.

This result was not what we were expecting. We conducted the same test on ten different people, and the results are very different. In fact, they are the complete opposite of our preliminary results.

At first, the hypothesis we made was that med A is 20% better than med B, but it seems that is not the case. There is a chance that all the people who took med B were healthier than the people who took med A, or the medicines might have been interchanged.

So, we have to assume one of the following possibilities is a factor affecting the results:

- The medicines might have been mislabelled or interchanged.
- People who are not cured by medicine A may be weak and may have other diseases.
- People who are cured by medicine B may be healthier than those who took medicine A, i.e., they eat healthily and exercise regularly.

We should try this experiment a few more times and observe the results.

This time, we checked that the medicines were not mislabelled and verified every other small detail. Still, all the experiments show results opposite to our preliminary results. Med A is not 20% better than med B, and we are confident that this is not due to random factors.

`Hence, we can confidently reject this preliminary hypothesis as we cannot prove it consistently in the experiments.`

On the other hand, if the results were not different enough for us to be confident that med A is not 20% better than med B, i.e., the results were similar, say 19% or 21% or 20.5%, we would say that we have `failed to reject the hypothesis.`

There are basically two types of hypothesis, namely, the **Null Hypothesis** and the **Alternate Hypothesis**. Let's look at each of them.

A null hypothesis (denoted by H0) represents the statement that is exactly the opposite of what we want to prove.

In other words, it states that there is no difference or no direct relationship between the two variables. For example,

We try to prove that the null hypothesis is false. This brings us to the second type.

An alternate hypothesis (denoted by H1) is a statement that is a potential outcome of what we want to prove. In other words, it is a statement made in counter to the null hypothesis. An alternate hypothesis works to verify if there is enough change to reject the null hypothesis.

- Null hypothesis: the accepted fact about something.

- Alternate hypothesis: your theory about that thing.

Let us take the following example.

Fig. 1 shows three people who took med A and three people who took med B. Fig. 2 shows the mean recovery time, i.e., 40 hours. Among the people who took med A, we might think person 1 is healthier than the rest who took med A. And among the people who took med B, person 3 took the longest time to recover, perhaps because of allergies that made his recovery slower than the rest. If this case were true, the difference between the mean recovery times of med A and med B would be very small. But when we calculate the individual means of med A and med B from the data, the difference is significantly larger. Based on this, the hypothesis would be:

- H1: There is a difference between med A and med B (as the difference between the individual means is significant)

But still, even in such a case, the difference between the two means is way too large for us to fail to reject the null hypothesis. If the difference were 1 or 2 hours, we could not reject the null hypothesis, because we know that not every observation is 100% error-free, and some random factors (as mentioned above) might affect the results. But in this case the difference we observed is significant. Therefore, we can confidently reject the null hypothesis.

This question might have popped into your mind. The answer is the `p-value`.

As we have seen earlier, `when we have enough evidence to prove that a null hypothesis is incorrect, we say that we have rejected the null hypothesis. And when we don't have enough evidence, we say that we have failed to reject the null hypothesis.`

But how much evidence is enough? For this, the p-value and the significance value are used.

A p-value (also known as calculated probability) is a value between 0 and 1. In simple terms, it is used to check if enough evidence is available to reject the null hypothesis.

This p-value is checked against a threshold called the significance value (α). The significance value is predetermined; **the closer it is to 0, the more significant the result.**

Generally, the significance value is taken as **0.05**, which means that if there is truly no difference and we perform the experiment `n` number of times, there is a `5%` chance of wrongly rejecting the null hypothesis. When we need to be highly confident to reject the null hypothesis, 0.001 (or 0.1%) is used as the significance value.
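As a purely illustrative aside (not part of the original post), here is one standard way to compute a p-value for the medicine example above, 8/10 cured by med A versus 6/10 by med B, using a two-sided Fisher exact test written in plain Python. The function name `fisher_two_sided` is our own:

```python
from math import comb

def fisher_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]."""
    n, row1, col1 = a + b + c + d, a + b, a + c

    def p(k):
        # Hypergeometric probability of k cured patients in the first group
        return comb(row1, k) * comb(n - row1, col1 - k) / comb(n, col1)

    p_obs = p(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    # Sum the probabilities of all tables at least as extreme as the observed one
    return sum(p(k) for k in range(lo, hi + 1) if p(k) <= p_obs + 1e-12)

# Medicine A cured 8 of 10 patients; medicine B cured 6 of 10.
print(round(fisher_two_sided(8, 2, 6, 4), 3))  # 0.628
```

With such a small sample, the p-value is about 0.63, far above 0.05, so ten patients per group are nowhere near enough evidence to reject the null hypothesis.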

I hope you have got a clear idea of **what hypothesis testing is**.
Also, for live projects and technical videos, check out our YouTube channel, and don't forget to like, share, and subscribe.

**Check out our latest video:**
How to make a personal portfolio using Github pages

Getting a job at **Amazon** is a dream that you would love to achieve. Being an online retail giant, Amazon's recruitment process is a tight race. You will need to solve their unique **Data Structure and Algorithms questions** to stand out from the competition.

This blog will take you through the **Top 5 Data Structure and Algorithm questions** asked in Amazon interviews. Though 5 is a small number, these questions cover a wide range of DSA topics and are explained with *Python code* for simplicity.

**Question:-** You will be given an array of non-negative integers, where each array element represents the maximum length of the jump from that position. By starting from the 1st element, you have to find out if you can reach the end of the array or not.

Here, if you are in the 1st position, you can jump 1 step (from 1 to 2). For the 2nd position, the maximum length of the jump is 2 (you can jump either 1 or 2 steps). For the 3rd, it is 1, and so on.

**Solution:-**

Let us look at a few test cases and check if we are able to reach the end or not.

Since the first element itself is 0 here, we can't jump anywhere (so we will just sit there).

Apart from arrays that start with 0, there are cases where we end up at zero from all the possible paths, as shown below.

What if there are no 0s in the array? We can definitely reach the end (even in the minimal case `[1, 1, 1, 1, 1]`).

Can you think of cases where there are 0s, but we can still reach the end?

Here are a few:-

We can jump over zeroes if the number(s) before the 0(s) can reach beyond them.

Hence we keep track of maximum reach, and if we encounter a zero thereafter, that means we can not move forward after that, so we return false.

Let's implement the same logic:-
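The code block itself is missing from this copy of the post, so here is a minimal Python sketch of the max-reach idea described above (the function name is our own):

```python
def can_reach_end(nums):
    """Return True if the last index is reachable from index 0."""
    max_reach = 0                      # farthest index reachable so far
    for i, jump in enumerate(nums):
        if i > max_reach:              # stuck: a zero blocked every path here
            return False
        max_reach = max(max_reach, i + jump)
    return True

print(can_reach_end([2, 2, 0, 1, 2, 0, 1, 1, 4]))  # True
```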

Sample input:- `2 2 0 1 2 0 1 1 4`

Sample output:- `True`

Try debugging the code yourself and post your doubts in the comments, if any.


**Question:-** Say you have an array where each element represents the stock price for that day. **Design an algorithm** to find the maximum profit.

Note:- You can not engage in multiple transactions at a time (i.e., Complete one transaction before starting another).

Explanation:-

Consider the above array. Here, the stock price on day 1 is `100` (whether you buy or sell it). For day 2 the price is `400`, for the 3rd day it is `500`, and so on.

For example, if you buy the stock on day 1 (for `100`) and sell it on day 2 (for `400`), the profit is `300` (`400 - 100`). Similarly, if you buy a stock on day 4 and sell it on day 5, the profit is `100` (`300 - 200`). What if we buy a stock on the 3rd day and sell it on the 4th day? That is a loss of `300` (`500 - 200`). Hence we do not perform such transactions.

Summing up all the facts together:-

Given an array A, we need to find the maximum total profit from differences `A[j] - A[i]` where `j > i`. We can achieve this simply by comparing adjacent values.

Let us see how:-

If we buy a stock on the 1st day and sell on the 3rd day, the profit is `400` (`500 - 100`).

We will get the same profit by performing two transactions:

- Buy on day 1 and sell on day 2.
- Buy on day 2 and sell on day 3.

The profit is (`400 - 100`) + (`500 - 400`) = `400`.

Hence our approach is to check whether a profit can be obtained from each pair of adjacent prices (i.e., whether the difference between a value and the previous one is greater than 0). If yes, we add this profit to the total profit.

Now that we have figured out what is expected of the problem, let's start coding.
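The code is not reproduced in this copy of the post; a minimal Python sketch of the adjacent-difference approach described above could look like this (the function name is our own):

```python
def max_profit(prices):
    """Sum every positive day-to-day price difference."""
    profit = 0
    for today, tomorrow in zip(prices, prices[1:]):
        if tomorrow > today:           # a profitable adjacent pair
            profit += tomorrow - today
    return profit

print(max_profit([100, 400, 500, 200, 300]))  # 500
```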

Sample input:- `100 400 500 200 300`

Sample output:- `500`

This is how we get `500`:-

(`400 - 100`) + (`500 - 400`) + (`300 - 200`) = `500`.

After seeing the code, try to guess the outputs for the following test cases:-

Output:- 865

Output:- 0 ( profit is not possible if the array is in decreasing order)

Output:- 350 (160 + 50 + 60 + 80)

Great job! Let's move to the next question.

**Question:-** If you are given a reference of a node in a connected undirected graph, return a deep copy (clone) of that graph.

In simple words, we just have to create the exact replica of the original graph.

If the original graph is:-
The cloned graph is:-

Here each node in the graph contains a value and list of neighbors. We will be given the first node, and we must return the copy of the given node as a reference to the cloned graph.

**Rules:-**

- Do not return the original graph.
- No. of vertices and edges remain the same.
- Make the same connections as the original graph.

**Solution:-**

What comes to your mind when you think of graph traversals? It's Breadth-First Traversal and Depth-First Traversal, right? These are the most commonly used graph traversal techniques, and we're going to use one of them to tackle this problem.

We are using Depth-First Traversal here as it is quicker and makes it easy to find child nodes in the graph. As we perform the traversal, we create a copy of each node and store it in a dictionary to avoid cycles.

This is how the keys are mapped to nodes (using a dictionary). And the node and list of neighbors are encapsulated (using class).

We only visit the nodes that are not present in the neighbors list. This helps in avoiding cycles and also storing nodes and their neighbors in sequential order.

Each node has a list of neighbors; this data is then used to create a replica of the original graph.

Here is the code that takes a node of the original graph as a parameter and returns the corresponding node of the cloned graph:-
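The code block is absent from this copy, so here is a minimal Python sketch of the approach described above: depth-first traversal with a dictionary mapping each original node to its copy. The `Node` class below is an assumed minimal structure, and the names are our own:

```python
class Node:
    def __init__(self, val, neighbors=None):
        self.val = val
        self.neighbors = neighbors if neighbors is not None else []

def clone_graph(node):
    """Deep-copy a connected undirected graph starting from `node`."""
    if node is None:
        return None
    cloned = {}  # original node -> its copy; prevents revisiting cycles

    def dfs(cur):
        if cur in cloned:
            return cloned[cur]
        copy = Node(cur.val)
        cloned[cur] = copy                # register before recursing
        copy.neighbors = [dfs(n) for n in cur.neighbors]
        return copy

    return dfs(node)
```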

**Question:-**

**Design and implement LRU cache**, which supports get and set operations.

**Solution:-**

Before we jump into the solution, let us understand every term in the question.

Assume that initially, we have many page requests and cache (memory) that can hold only some pages.

Let us say our cache size is n, which means the cache can accommodate n pages. Initially, the cache is empty. Then we start adding one page after another.

Among all the elements, the least recently used is 1. If we encounter another page, say 5, remove 1 and insert 5.

- The new page is not present in the cache (as shown above). We call this case a page fault; the set() operation handles it.
- The new page is present. This case is known as a hit; the get() operation handles it.

*Details of the get operation:-*

To check if the page is present or absent, we use the get method. If the page is not present in the cache, get returns -1. Then we call the set function to replace the least recently used page.

If the page is already present, we move it to the end of the cache, making it the most recently used page. Say our cache contains `[1, 2, 3, 4]` and we encounter 2. Then 2 is sent to the end, making the cache `[1, 3, 4, 2]`.

Let us put this logic into code.
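The original code block is not shown here; a minimal Python sketch of the get/set logic described above, using `collections.OrderedDict` to keep pages in recency order, might look like this:

```python
from collections import OrderedDict

class LRUCache:
    """LRU cache: most recently used keys sit at the end of an OrderedDict."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()

    def get(self, key):
        if key not in self.cache:
            return -1                        # page fault
        self.cache.move_to_end(key)          # mark as most recently used
        return self.cache[key]

    def set(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        elif len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)   # evict the least recently used page
        self.cache[key] = value
```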

**Question:-** Let's say we have a sequence of natural numbers from 1 to n. Given the value of n, **find the kth permutation sequence**.

**Solution:-**
Sequence of natural numbers:-

For `n=3`, 6 sequences are possible. And if `k=4`, we have to find the 4th sequence.

How do we find the kth permutation?

Generating all permutations won't help here because of the time complexity.

Instead, we use recursion and math to figure out the numbers depending on the position.

What is the role of position in finding kth permutation?

Lets take the help of the above example to answer this question.

In the above example (`n=3`), the first two sequences start with 1, the next two with 2, and the last two with 3.

Similarly, for `n=4` there are `(4-1)!` (i.e., 3!) sequences each starting with 1, 2, 3, and 4. In general, there are `(n-1)!` sequences that start with a specific first number. What about the second number? Simple, it is `(n-2)!`. In this way, we find the number at each position using the formula `k / (n-i)!`, where i is the position and k is zero-based. And as we move from `n-1` down to `0`, we keep removing the numbers that are already used.

For example, take `n=3` and `k=4`. This is how the values change in each iteration.

Sample input:- `n = 5, k = 16`

Output:- `14352`

Hope this blog helps you understand the **Data Structure and Algorithm interview questions** asked at Amazon. Also, check out our YouTube channel, where we post videos about Data Science, Python, Django, and many more topics.

Also read - Top 5 DSA interview questions asked at Google

**Pandas** is a data manipulation and analysis library written in Python. It has a plethora of functions and methods to help speed up the data analysis process. Pandas' popularity stems from its **usefulness**, **adaptability**, and **straightforward** syntax.

Over the years, Pandas has been the industry standard for data reading and processing. This blog includes the top ten **Pandas techniques** that will be more than useful in your data science projects.

*How to start with Data structures & Algorithms*

Exploratory Data Analysis is done to summarise the key characteristics of a data set and to understand it better. It also helps us rapidly evaluate the data and experiment with different factors to see how they affect the results. One of the important analyses is the **conditional selection of rows**, or **data filtering**. There are two methods to do this.

The examples shown below utilize a data set that specifies different properties of cars. Take a look at the data set:

```
import pandas as pd
path = "C:/Users/aakua/OneDrive/Desktop/Toyota.csv"
df = pd.read_csv(path,na_values=["??","###","????"], index_col=0)
df.head()
```

`Output:`

The Pandas `DataFrame.loc[]` property allows you to access a set of rows and columns in the supplied data set by label(s) or a boolean array.

For this example, we will filter the rows with `Age <= 25` and `FuelType` equal to `Petrol`.

```
data = df.loc[(df['Age'] <= 25) & (df['FuelType'] == 'Petrol')]
data[:8]
```

`Output:`

The `query()` method is used to query the columns of a DataFrame with a boolean expression. The only significant difference between a **query** and a **standard conditional mask** is that in a query a string is treated as the conditional statement, whereas a conditional mask filters the data with booleans and returns the values according to the actual conditions.

Here, we have filtered the rows that have `KM` values between 15000 and 30000.

```
data_q = df.query('15000.0 < KM < 30000.0')
data_q[:8]
```

Pandas provides two methods for sorting a data frame: `sort_values()` and `sort_index()`. The sorting order can be controlled.

This function is used to sort one or more columns of a Pandas DataFrame. Here, we have sorted the data frame in ascending order according to the `Price` column.

```
df.sort_values(by = 'Price')[:8]
```

`Output:`

This is used to sort a Pandas DataFrame by its index labels; note that with `axis=1`, as in the example below, it sorts by the column labels rather than the row index.

```
df.sort_index(axis=1, ascending = True)[:8]
```

`Output:`

A GroupBy operation involves some combination of **splitting the object**, **applying a function**, and **combining the results**. It may be used for grouping vast quantities and computing operations on them.

**A.** Let us say we want to look at the average price of the cars for each `FuelType`, such as CNG, Petrol, and Diesel. Take a moment and think about the problem.

The GroupBy function can quickly implement this! We will first split the data according to `FuelType` and then apply the `mean()` function to the `Price`.

```
df.groupby(['FuelType'])[['Price']].mean()
```

`Output:`

**B.** Now, we will group the data as before, but we will also compute the `mean` of the `Age` of the cars.

```
df.groupby(['FuelType'])[['Price', 'Age']].mean()
```

`Output:`

**C.** What if you want to group the data according to more than one column? Here's the solution:

```
df.groupby(['FuelType','Automatic'])[['Price', 'Age']].mean()
```

`Output:`

See, that was easy! Let us explore the next one now.

Another essential data manipulation technique is **data mapping**. As a mathematical concept, mapping is the act of producing a new collection of values, generally one by one, from an existing set of values. The biggest advantage of this function is that it can be applied to an entire data set.

The Pandas library offers the `map()` method to handle Series data. Pandas `map()` is employed for mapping each value in a Series to another value depending on an input correspondence. This input may be a Series or even a dictionary.

Let us map the `FuelType` variable to a **Vehicle Type**. For instance, `CNG` will be mapped to **Hatchback**, `Petrol` to **Sedan**, and `Diesel` to **Van**.

```
# dictionary to map FuelType with Vehicle
map_fueltype_to_vehicle = {'CNG': 'Hatchback',
                           'Petrol': 'Sedan',
                           'Diesel': 'Van'}
df['Vehicle'] = df['FuelType'].map(map_fueltype_to_vehicle)
df[45:55]
```

`Output:`

After reading their titles, there is little doubt about the goals of these two approaches; nonetheless, here are the definitions:

**a. nsmallest( )**: The Pandas `nsmallest()` method is used to obtain the `n` smallest values in the data set or Series.

**b. nlargest( )**: The Pandas `nlargest()` method obtains the `n` largest values in the data set or Series.

Let us find the five observations with the smallest and largest values in the `Price` column:

```
df.nsmallest(5, "Price")
```

`Output:`

```
df.nlargest(5, "Price")
```

`Output:`

**a. idxmax( ):** Pandas `DataFrame.idxmax()` gives the index of the first occurrence of the maximum value along the specified axis. All NA/null values are omitted when determining the index of the greatest value.

**b. idxmin( ):** Pandas `DataFrame.idxmin()` gives the index of the first occurrence of the minimum value along the specified axis. All NA/null values are omitted when determining the index of the least value.

Let us say we want to access the indices of the maximum and minimum values in the `Price` column for each `FuelType`. This can be done in the following manner:

```
df.groupby(['FuelType'])[['Price']].idxmin()
```

`Output:`

```
df.groupby(['FuelType'])[['Price']].idxmax()
```

`Output:`

While performing different functions on the data set, as explained above, grouping data becomes helpful. However, what if we want to save the grouped data as a new file on our system?

Pandas offers functions that come in handy in such situations, allowing a DataFrame to be saved as a file with various extensions, such as `.xlsx`, `.csv`, etc.

```
data_q = df.query('15000.0 < KM < 30000.0')
data_q.to_csv('Query_Data.csv')
```

`Output: File Saved`

To save the data frame as **Excel**, use `to_excel()`.

To save the data frame as **JSON**, use `to_json()`.

The data set sometimes includes null values, which are then shown in the DataFrame as NaN.

**a. dropna( ):** The `dropna()` method of a Pandas DataFrame is used to eliminate rows and columns containing Null/NaN values.

```
df.shape
```

`Output: (1436, 9)`

```
data1 = df.dropna()
data1.shape
```

`Output: (1097, 10)`

**b. fillna( ):** Unlike the `dropna()` method, `fillna()` handles Null values in a DataFrame by allowing the user to substitute NaN values with a value of their own.

๐ Before eliminating null values, the data set looked like:

```
df.head(10)
```

`Output:`

Here, the null values are substituted with `0`. After using the `fillna()` method:

```
df = df.fillna(0)
df.head(10)
```

`Output:`

Pandas `DataFrame.corr()` returns the pairwise correlation of all columns in a DataFrame. Any `NA` values are automatically excluded, and non-numeric columns in the DataFrame are disregarded.

```
df.corr()
```

`Output:`

Some key points to note about **correlation values**:

- Correlation values fluctuate between `-1` and `1`.
- `1` implies a **perfect correlation**: every time a value in the first column rises, the corresponding value in the second column rises too.
- `0.9` indicates a **strong** connection: if one value increases, the other is very likely to rise as well.
- `-0.9` is just as strong as `0.9`, but the other value decreases as one increases.
- `0.2` does NOT imply a good correlation: one value going up does not mean the other goes up too.
- It is reasonable to claim that two columns have a decent correlation when the value lies beyond `-0.6` or `0.6`.

The `apply()` function is used to run **arithmetic** or **logical** code over an entire data set or Series using a Python function. The function to be applied can be an **inbuilt** function or a **user-defined** function.

**a. Applying an inbuilt function:**

```
import numpy as np
df['Price'].apply(np.sqrt)[:8]
```

`Output:`

**b. Applying a user-defined function:**

```
def fun(num):
    if num < 10000:
        return "Low"
    elif num >= 10000 and num < 25000:
        return "Normal"
    else:
        return "High"

new = df['Price'].apply(fun)
new[105:115]
```

`Output:`

Knowing these Pandas techniques will surely give you an advantage in the field of data science. I hope you found this blog useful!

Subscribe to our YouTube channel for more great videos and projects.

In MS-Paint, when we take the brush to a pixel and click, the color of the region around that pixel is replaced with the newly selected color. The following problem statement captures this task.

Given a 2D screen, the location of a pixel on the screen, and a color, replace the color of the given pixel and all adjacent same-colored pixels with the given color. For example,


**Example:**

```
Input:
screen[M][N] = {{1, 1, 1, 1, 1, 1, 1, 1},
                {1, 1, 1, 1, 1, 1, 0, 0},
                {1, 0, 0, 1, 1, 0, 1, 1},
                {1, 2, 2, 2, 2, 0, 1, 0},
                {1, 1, 1, 2, 2, 0, 1, 0},
                {1, 1, 1, 2, 2, 2, 2, 0},
                {1, 1, 1, 1, 1, 2, 1, 1},
                {1, 1, 1, 1, 1, 2, 2, 1}};
x = 4, y = 4, newColor = 3
The values in the given 2D screen indicate colors of the pixels.
x and y are coordinates of the brush; newColor is the color that should replace
the previous color on screen[x][y] and all surrounding pixels with the same color.
Output:
Screen should be changed to the following.
screen[M][N] = {{1, 1, 1, 1, 1, 1, 1, 1},
                {1, 1, 1, 1, 1, 1, 0, 0},
                {1, 0, 0, 1, 1, 0, 1, 1},
                {1, 3, 3, 3, 3, 0, 1, 0},
                {1, 1, 1, 3, 3, 0, 1, 0},
                {1, 1, 1, 3, 3, 3, 3, 0},
                {1, 1, 1, 1, 1, 3, 1, 1},
                {1, 1, 1, 1, 1, 3, 3, 1}};
```

**Ans:**

This question can be solved by using **Recursion**.
The concept is simple: we first replace the current pixel's color, then recurse on the four surrounding pixels. The following is a detailed algorithm:

Here, first change 2 to 3 and then check for surrounding pixels

We write a recursive function floodFill(screen[M][N], x, y, prevC, newC) that replaces the previous color 'prevC' at (x, y), along with all the surrounding pixels of the same color, with the new color 'newC'.

If x or y is outside the screen, then return.

If color of screen[x][y] is not same as prevC, then return.

Recur for north, south, east and west :

floodFillUtil(screen, x+1, y, prevC, newC);

floodFillUtil(screen, x-1, y, prevC, newC);

floodFillUtil(screen, x, y+1, prevC, newC);

floodFillUtil(screen, x, y-1, prevC, newC);

**Code:**
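The code block is absent from this copy of the post; a minimal Python sketch of the recursive flood fill described above could be the following (the function names are our own, not the floodFill/floodFillUtil pair from the text):

```python
def flood_fill(screen, x, y, new_color):
    """Replace the color at (x, y) and all connected same-colored pixels."""
    prev_color = screen[x][y]
    if prev_color == new_color:        # nothing to do; also avoids infinite recursion
        return screen

    def fill(i, j):
        # Stop if (i, j) is outside the screen
        if i < 0 or i >= len(screen) or j < 0 or j >= len(screen[0]):
            return
        # Stop if this pixel is not the color we are replacing
        if screen[i][j] != prev_color:
            return
        screen[i][j] = new_color
        fill(i + 1, j)  # south
        fill(i - 1, j)  # north
        fill(i, j + 1)  # east
        fill(i, j - 1)  # west

    fill(x, y)
    return screen
```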

In a party of N people, **only one** person is known to everyone. Such a person may be present in the party; if yes, (s)he **doesn't know** anyone at the party. We can only ask questions like "does A know B?". Find the stranger (celebrity) in the minimum number of questions.

We can describe the problem input as an array of numbers/characters representing persons in the party. We also have a hypothetical function HaveAcquaintance(A, B) which returns true if A knows B, false otherwise. How can we solve the problem?

**Example:**

```
Input:
MATRIX = { {0, 0, 1, 0},
           {0, 0, 1, 0},
           {0, 0, 0, 0},
           {0, 0, 1, 0} }
Output: id = 2
Explanation: The person with ID 2 does not
know anyone, but everyone knows him.
Input:
MATRIX = { {0, 0, 1, 0},
           {0, 0, 1, 0},
           {0, 1, 0, 0},
           {0, 0, 1, 0} }
Output: No celebrity
Explanation: There is no celebrity.
```

**Ans:**

This problem can be solved using **Recursion**. If the potential celebrity among N-1 persons is known, then the solution for N can be found from it. A **potential celebrity** is the one who is left after eliminating n-1 people, using the strategy given below:

If A knows B, then A cannot be a celebrity. But B could be.

Else If B knows A, then B cannot be a celebrity. But A could be.

The approach, as mentioned above, uses Recursion to obtain the potential celebrity amongst n persons; it recursively calls itself for n-1 persons until the base case of 0 persons is reached. For 0 persons, -1 is returned, which indicates that there is no probable celebrity. In the ith stage of the Recursion, the ith and (i-1)th persons are compared to check whether either knows the other. Using the above logic, the potential celebrity is returned to the (i+1)th stage.

As soon as the recursive function returns an id, we will check if this id does not know anybody else, but all the others know this id. If this is true, then this id will be the celebrity.

**Algorithm:**

Write a recursive function that takes an integer n.

Check the base case; if value of n is 0, then return -1.

Call the recursive function and get the ID of potential celebrity from the first n-1 elements.

If the id is -1, then allot n as the possible celebrity and return the value.

If the possible celebrity of the first n-1 elements knows n-1, then return n-1 (0 based indexing).

If the celebrity of the first n-1 elements does not know n-1, then return id of the celebrity of n-1 elements (0 based indexing).

If not, then return -1.

Then create a wrapper function to check whether the id returned by the function is the celebrity or not.

**Code:**
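The code is not shown in this copy; a minimal Python sketch of the recursive elimination plus the wrapper check described above could look like this (the function names are our own, and `matrix[a][b] == 1` stands in for HaveAcquaintance(A, B)):

```python
def find_celebrity(matrix):
    """Return the id of the celebrity, or -1 if there is none.
    matrix[a][b] == 1 means person a knows person b."""
    n = len(matrix)

    def knows(a, b):
        return matrix[a][b] == 1

    def potential(i):
        """Potential celebrity among the first i persons."""
        if i == 0:
            return -1                  # base case: nobody left
        pid = potential(i - 1)
        if pid == -1:
            return i - 1               # allot person i-1 as the candidate
        if knows(pid, i - 1):
            return i - 1               # pid knows someone, so pid is out
        if knows(i - 1, pid):
            return pid                 # i-1 knows pid, so i-1 is out
        return -1

    pid = potential(n)
    if pid == -1:
        return -1
    # Wrapper check: pid must know nobody, and everyone else must know pid.
    for i in range(n):
        if i != pid and (knows(pid, i) or not knows(i, pid)):
            return -1
    return pid
```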

Given a knapsack of weight capacity W and a set of n items, each with a value `val[i]` and weight `wt[i]`, we need to calculate the **maximum** value that can be achieved. This differs from the classical Knapsack problem in that here we are allowed to use an **unlimited** number of instances of each item.

**Example:**

```
Input : W = 100
val[] = {1, 30}
wt[] = {1, 50}
Output : 100
There are many ways to fill knapsack.
1) 2 instances of 50 unit weight item.
2) 100 instances of 1 unit weight item.
3) 1 instance of 50 unit weight item and 50
instances of 1 unit weight items.
We get maximum value with option 2.
Input : W = 8
val[] = {10, 40, 50, 70}
wt[] = {1, 3, 4, 5}
Output : 110
We get maximum value with one unit of weight 5 and one unit of weight 3.
```

**Ans:**

It is an unbounded knapsack problem, as 1 or more instances of any resource can be used. We will use a simple 1D array, say dp[W+1], such that dp[i] stores the maximum value that can be achieved using all the items and a knapsack capacity of i. Note that a 1D array is used here, which differs from the classical knapsack where we use a 2D array, because the set of available items never changes: we always have all items available.

We can recursively compute dp[] using below formula:

dp[0] = 0; dp[i] = max(dp[i], dp[i - wt[j]] + val[j]), where j varies from 0 to n-1 such that wt[j] <= i; the result is dp[W].

**Code:**
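The code block is missing from this copy; the recurrence above can be sketched in Python as follows (the function name is our own):

```python
def unbounded_knapsack(W, val, wt):
    """dp[i] holds the maximum value achievable with knapsack capacity i."""
    dp = [0] * (W + 1)
    for i in range(1, W + 1):
        for v, w in zip(val, wt):
            if w <= i:                       # item j fits into capacity i
                dp[i] = max(dp[i], dp[i - w] + v)
    return dp[W]

print(unbounded_knapsack(100, [1, 30], [1, 50]))              # 100
print(unbounded_knapsack(8, [10, 40, 50, 70], [1, 3, 4, 5]))  # 110
```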

Given two strings, the task is to check whether these strings are meta strings or not. Meta strings are strings that can be made equal by exactly one swap in either of the strings. Equal strings are not considered meta strings here.

**Example:**

```
Input : str1 = "geeks"
str2 = "keegs"
Output : Yes
By just swapping 'k' and 'g' in any of string,
both will become the same.
Input : str1 = "rsting"
str2 = "string"
Output : No
Input : str1 = "Converse"
str2 = "Conserve"
Output : Yes
```

**Ans:**

First, check if both the strings are equal in length or not, if not then return false.

Otherwise, start comparing both the strings and count the number of unmatched characters. Also store the index of unmatched characters.

If the number of unmatched characters is anything other than exactly 2, return false.

Otherwise check if swapping any of these two characters in any string would make the string equal or not.

If yes then return true. Otherwise return false.

**Code:**
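The code itself is not reproduced in this copy; a minimal Python sketch of the steps described above (count mismatches, record their indices, check whether one swap fixes both) could be:

```python
def are_meta_strings(s1, s2):
    """True if the strings become equal after exactly one swap in one of them."""
    if len(s1) != len(s2):
        return False
    # Indices where the two strings disagree
    diffs = [i for i in range(len(s1)) if s1[i] != s2[i]]
    if len(diffs) != 2:                # 0 (equal strings) or >2 mismatches: not meta
        return False
    i, j = diffs
    # Swapping positions i and j in one string must fix both mismatches
    return s1[i] == s2[j] and s1[j] == s2[i]

print(are_meta_strings("geeks", "keegs"))    # True
print(are_meta_strings("rsting", "string"))  # False
```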

A number is called a Jumping Number if all adjacent digits in it differ by 1. The difference between 9 and 0 is not considered as 1. All single digit numbers are considered as Jumping Numbers. For example 7, 8987 and 4343456 are Jumping numbers but 796 and 89098 are not.

Given a positive number x, print all Jumping Numbers smaller than or equal to x. The numbers can be printed in any order.

**Example:**

```
Input: x = 20
Output: 0 1 2 3 4 5 6 7 8 9 10 12
Input: x = 105
Output: 0 1 2 3 4 5 6 7 8 9 10 12
21 23 32 34 43 45 54 56 65
67 76 78 87 89 98 101
Note: Order of output doesn't matter,
i.e. numbers can be printed in any order
```

**Ans:**

A straightforward solution is to traverse all the numbers from 0 to x. For each traversed number, check if it is a jumping number. If it is a jumping number, then print it. Otherwise, ignore it. The time complexity of this solution is O(x).

A better and more efficient solution solves this problem in O(k) time, where k is the number of jumping numbers smaller than or equal to x. The approach is to use BFS or DFS. Assume that we have a graph where the starting node is 0, and we need to traverse it from the start node to all reachable nodes.

**Code:**
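The code block is absent from this copy; here is a minimal Python sketch of the BFS approach described above, growing each jumping number by one digit at a time (the function name is our own):

```python
from collections import deque

def jumping_numbers(x):
    """Return all jumping numbers <= x, built breadth-first from the digits 1..9."""
    result = [0]                      # 0 is a single-digit jumping number
    queue = deque(range(1, 10))
    while queue:
        num = queue.popleft()
        if num > x:
            continue                  # too big; its children are bigger still
        result.append(num)
        last = num % 10
        if last > 0:                  # append a digit one smaller than the last
            queue.append(num * 10 + last - 1)
        if last < 9:                  # append a digit one larger than the last
            queue.append(num * 10 + last + 1)
    return sorted(result)

print(jumping_numbers(20))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12]
```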

So we have completed the 5 Data Structures and Algorithms questions that are repeatedly asked in Google interviews. If you liked today's blog, please share it with your friends as well!

Also read - Data Structure & Algorithms interview questions

For more such cool blogs and projects, check out our YouTube channel

**What is ReactJS?**

ReactJS is an open-source JavaScript library maintained by Facebook and the developers' community. This library is widely used in developing beautiful user interfaces for the web.

ReactJS is used to design user interfaces, also called views. One of the plus points of ReactJS is that it takes care of the view and leaves the rest of the application for you to decide.

*Out of several tools and technologies available for web development, what makes ReactJS the favorite of many?*

**Enhanced User Experience**๐ฅ

ReactJS focuses on user experience, unlike AngularJS and other technologies. It provides users with a highly responsive interface, all thanks to JavaScript interactions between the native environment of the device and ReactJS. This makes web apps user-friendly and fast.

**JSX**๐ป

JSX syntax is a blend of JavaScript and HTML. ReactJS uses JSX, making it efficient and easy to code. JSX makes writing components easier and allows developers to render markup without concatenating strings.

Proper use of native APIs is the added advantage to ReactJS, while JavaScript adds cross-platform support to it.

**High efficiency**๐

ReactJS creates its own virtual DOM where all the components live, resulting in performance gains: ReactJS works out the changes needed in the real DOM and updates the DOM tree accordingly. This way, ReactJS avoids costly DOM operations and keeps pages efficient.

**Reusable Components**

ReactJS offers the ability to reuse components of different levels. ReactJS components are isolated, and changes in one do not affect others. This reuse adds up to increased efficiency in coding and thus saves time.

**SEO Friendly**๐ค๐ป

For ages, JavaScript has been a no-go for search recommendations on search engines. Search engines often fail to understand JavaScript, which degrades the SEO score of a website. ReactJS solves this problem since it can render the page on a server, and the virtual DOM is returned as a standard page. Search Engine Optimization is no longer a hassle with ReactJS.

**Popularity**๐

Have you ever wondered how apps like Facebook, Instagram, and Netflix are built? You guessed it right: it is ReactJS. All the major companies are using modern technology like ReactJS to make their user experience engaging and impressive. This makes ReactJS one of the most popular libraries and the first choice for many techies.

ReactJS is a popular choice among web developers due to its sheer coding pleasure ๐. Adding ReactJS to your skillset is a good choice and can make you land dream jobs.

Also read - JavaScript: Events and Event Listener

For more such cool blogs on technology, check out our YouTube channel.


**This application of your knowledge, skills, or experience gained from one situation to another is transfer learning** ๐. Let us see how this works similarly for machines ๐ค.
๐ We'll try to cover the following points in this blog:

|-- What is Transfer Learning

|-- How to use Transfer Learning

|-- Key questions to ask

|    |-- What to Transfer

|    |-- When to Transfer

|    |-- How to Transfer

|-- Strategies of Learning

|    |-- Inductive

|    |-- Unsupervised

|    |-- Transductive

|-- Benefits of Transfer Learning

Machine Learning (ML) is a branch of Computer Science and Artificial Intelligence that includes training machines to automatically perform predictions by learning through experience.

Different algorithms are used to build a machine learning model. The machine learning model takes a cleaned dataset as input and learns from it by identifying ๐ต the patterns in the data. An ML model is selected based on the available data and the task to be accomplished. Machine learning models include linear regression, k-means clustering, decision trees, random forests, etc.

๐ **Applications of ML**: Weather forecasting, Image classification, Language translation, Recommendation systems, etc.

Deep Learning is a sub-domain of machine learning concerned with creating neural networks to solve complex problems.

*Neural Network* is created with the help of different algorithms that, as a whole, *mimics the human brain* ๐ง . The deep learning neural network consists of the **input layer, hidden layer, and output layer**. You can see the various layers and their use in the image below.

You can view how to create a simple neural network using Tensorflow here.

๐ Generally, deep learning is used over machine learning when we need to perform very complex operations, which require high computing power, on a large amount of data.

Now let's get started with transfer learning.

Transfer learning is the improvement of learning in a new task through the transfer of knowledge from a related task that has already been learned.

There are two terminologies that you must understand before proceeding further.

๐ Terminologies used:

**Source task**: the task from which the knowledge is originally learned.

**Target task**: the new task to which that knowledge is transferred.

Transfer learning ๐ is a type of machine learning ๐ป in which the output of one model serves as the input to another model belonging to the same domain. Transfer Learning is used to solve complex **Natural Language Processing (NLP)** ๐ค, **Computer Vision** ๐ท, **Reinforcement Learning** ๐ค, or **Deep Learning** problems ๐. It reduces the time and computational power spent on training a complete model with large datasets from scratch. One can take a pre-trained model as a base and create a task-specific model by using transfer learning.

This question must have come to your mind ๐ค: why can't we make a neural network from scratch? We cannot build a model from scratch every time. Even if we have the data, training the model and optimizing it will take too long. Also, the requirement for computational power is very high. Therefore, using knowledge of a source task relative to the target task ๐ฏ makes more sense in practice.

๐ก This type of learning, where one model reuses the knowledge of another model ๐ค, can significantly reduce the time taken and increase the efficiency of the output.

๐ There are two approaches to transfer learning which are mainly used:

๐ **Pre-trained model approach**: Many research institutions create models for a specific task that are trained with large and challenging datasets. One can `select a pre-trained model` from a large number of available models specific to the task. Use this model as the starting point to induce learning in the target model and then optimize the resultant model as needed.

๐ **Develop a source model approach**: If we don't wish to go with a pre-trained model, we can also `create a source model` specific to the task. This model must be feature-enriched and should consist of features suitable for both the base and target models. Use this model as the starting point to induce learning in the target model and then optimize the resultant model as needed.
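For instance, the pre-trained model approach can be sketched in Keras (the choice of MobileNetV2, the 150ร150 input, and the two-class head are illustrative assumptions; in practice you would pass `weights='imagenet'` instead of `None`, which here merely avoids a download):

```python
import tensorflow as tf

# Hypothetical sketch: reuse a pre-trained backbone as the source model.
# weights=None builds the architecture without downloading ImageNet weights.
base = tf.keras.applications.MobileNetV2(
    input_shape=(150, 150, 3), include_top=False, weights=None)
base.trainable = False  # freeze the transferred knowledge

# Attach a small task-specific head and train only that part.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

Only the small dense head learns during training, which is why this approach is so much cheaper than training the whole network from scratch.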

๐ก There are `three questions` we have to answer to start with transfer learning.

๐ **What to transfer**: This question concerns what part of the knowledge from the source task is being taken for the target task. Some knowledge may be compatible with just a specific domain, while other knowledge may be consistent across more than one domain. We try to clarify `what knowledge of the source task can be transferred` or reused to get a performance boost in the target task.

๐ **When to transfer**: The answer to this question directly relates to the quality of transfer that we will have in the end. We have already discussed that when the source and target domains are not the same, the transfer must not be forced. Such a transfer can yield an unsuccessful result. Transfer learning without understanding the domain and task can negatively influence the model's efficiency; this is known as a `negative transfer`.

๐ **How to transfer**: Knowing what to transfer and when to transfer, we have to identify the various ways of actually transferring the knowledge, which may include altering the algorithms to make it fit with the target domain and, after doing so, tuning it for better efficiency.

๐ **Inductive Transfer Learning** - The *source and target domains are the same* ๐ in this type of transfer learning, but the *source and target tasks are different*. The source domain is required to use inductive bias to improve the target task. Based on whether we have labeled or unlabeled data, this learning can be divided into two categories, similar to multi-task learning and self-taught learning.

๐ **Unsupervised Transfer Learning** - This is similar to inductive transfer learning. The *source and target tasks are different* but related. The data is unlabeled in both the source and target tasks. Unsupervised learning techniques such as clustering and dimensionality reduction are used in the target task.

๐ **Transductive Transfer Learning**- The *source and target tasks are the same* in this setting, but their *domains are different*. There is no data available in the target domain, while plenty of labeled data is available in the source domain. Transductive learning can be further categorized into two types based upon the feature spaces and marginal probability distribution.

**Higher Start**: As the base model is already trained, the target model has a higher start because of its knowledge at the beginning (unrefined stage).

**Higher Slope**: The transfer learned model has a better performance and learning curve because of its ability to find patterns or solve problems, whichever the use case, more efficiently.

**Higher Asymptote**: The combined capabilities of both the models result in a better asymptote than it usually would be.

๐ก Transfer methods tend to be highly dependent on the machine learning algorithms being used to learn the tasks and can often be considered extensions of those algorithms. Some work in transfer learning is in the context of inductive learning and involves extending well-known classification and inference algorithms such as **Neural Networks**, **Bayesian Networks**, and **Markov Logic Networks**. Another central area is reinforcement learning, which involves extending algorithms such as **Q-Learning** and **Policy Search**.

I hope you have got a clear idea of what transfer learning is ๐. Check out our ๐YouTube channel for more such content.

**Neural networks** are what deep learning algorithms are built upon. In other words, they are just striving to produce accurate forecasts ๐, like any other machine learning model ๐ค. But what distinguishes them is their capacity to process **massive amounts of data** ๐ and predict the targets with **great accuracy!** ๐ฏ

Artificial neural networks are a collection of nodes, inspired by brain neurons, that are linked together to form a network.

In general, a neural network consists of an `input layer`, one or more `hidden layers`, and an `output layer`, all linked together ๐ to give outputs. The **activation function** shown here is a hyperparameter that will be addressed later in the blog.

`P` and `Q` are **input neurons** in the above ๐ diagram, and `w1` and `w2` are the **weights**.

๐ It is important to note that `w1` and `w2` measure the strength ๐ช of the connections made to the center neuron, which sums up all inputs. Here, `b` is a constant known as the `bias`. This is what it looks like mathematically:

**SUM = w1 * P + w2 * Q + b**

๐ As long as the sum exceeds zero, the output is **one or 'yes'**; otherwise, it is **zero or 'no.'** Each input is linked with a weight in a neural network.
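The sum-and-threshold behaviour above can be sketched in a few lines of Python (the weight and bias values are made up purely for illustration):

```python
def neuron(P, Q, w1=0.8, w2=-0.4, b=0.1):
    # weighted sum of the inputs plus the bias constant
    s = w1 * P + w2 * Q + b
    # step activation: fire ('yes') only when the sum exceeds zero
    return 1 if s > 0 else 0

print(neuron(1, 1))  # 0.8 - 0.4 + 0.1 = 0.5 > 0, so the neuron fires: 1
print(neuron(0, 1))  # -0.4 + 0.1 = -0.3, so it stays silent: 0
```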

The steepness of the activation function rises ๐ as the weight increases. In other words, weight determines how quickly the activation function will fire ๐ฅ, whereas bias is utilized to delay activation.

Like the intercept ๐ in a linear equation, bias is introduced to the equation. Hence, bias is a constant that aids the model to fit the provided data ๐ in the best possible way๐ฏ.

Implementing deep learning algorithms can be daunting ๐, but thanks to **Google's TensorFlow**, obtaining data ๐, training models, serving predictions ๐, and improving future outcomes all become more accessible ๐คฉ.

**TensorFlow** is a free and open-source framework built by the Google Brain team for numerical computing ๐งฎ and solving complex machine learning ๐ค problems. TensorFlow incorporates a wide variety of deep learning models and methods through its shared paradigm๐บ. TensorFlow enables developers to construct dataflow graphs, which are data structures that represent how data flows across a graph or a set of processing nodes. Each node in the graph symbolizes a mathematical process, and each link or edge between nodes is a tensor, which is a multidimensional array.

We will be using a vehicle ๐ detection dataset from Kaggle. This is a binary classification problem. The idea ๐ก is to develop a model that can distinguish between pictures with and without cars ๐. For the execution, Kaggle notebooks are used; you may alternatively use Google Colab.

Click here to download the dataset!

๐ The dataset consists of `8792` images of **vehicles** and `8968` images of **non-vehicles**. So, let us begin by importing the dataset:

```
import numpy as np
import pandas as pd
import os

# list every file Kaggle mounted under /kaggle/input
for dirname, _, filenames in os.walk('/kaggle/input'):
    for filename in filenames:
        print(os.path.join(dirname, filename))

maindir = "../input/vehicle-detection-image-set/data"
os.listdir(maindir)
```

`Output: ['vehicles', 'non-vehicles']`

```
vehicle_dir = "../input/vehicle-detection-image-set/data/vehicles"
nonvehicle_dir = "../input/vehicle-detection-image-set/data/non-vehicles"
vehicle = os.listdir(maindir+"/vehicles")
non_vehicle = os.listdir(maindir+"/non-vehicles")
print(f"Number of Vehicle Images: {len(vehicle)}")
print(f"Number of Non Vehicle Images: {len(non_vehicle)}")
```

```
Output: Number of Vehicle Images: 8792
        Number of Non-Vehicle Images: 8968
```

๐ Let us print an image from **vehicle_dir**:

```
import cv2
import matplotlib.pyplot as plt
vehicle_img = np.random.choice(vehicle,5)
img = cv2.imread(vehicle_dir+'/'+vehicle_img[0])
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.xlabel("Vehicle")
plt.tight_layout()
plt.imshow(img)
plt.show()
```

`Output:`

There are several open-source libraries for computer vision ๐คand image processing๐. **OpenCV** is one of the largest. Using pictures and videos, it can recognize items, people, and even the handwriting of a human being.

When the user calls `cv2.imread()`, an image is read from a file. Among the programming languages supported by OpenCV are Python, C++, and Java.

๐ We will also check an image in the **nonvehicle_dir**:

```
import cv2
import matplotlib.pyplot as plt
nonvehicle_img = np.random.choice(non_vehicle,5)
img = cv2.imread(nonvehicle_dir+'/'+nonvehicle_img[0])
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.xlabel("Non-Vehicle")
plt.tight_layout()
plt.imshow(img)
plt.show()
```

`Output:`

๐ We now gather the images into a `train` list and their labels into a `test` list:

```
import cv2
import tqdm
from tensorflow.keras.preprocessing import image

train = []
test = []
for i in tqdm.tqdm(vehicle):
    img = cv2.imread(vehicle_dir + '/' + i)
    img = cv2.resize(img, (150, 150))
    train.append(img)
    test.append("Vehicle")
for i in tqdm.tqdm(non_vehicle):  # was non_vehicle1, which is undefined
    img = cv2.imread(nonvehicle_dir + '/' + i)
    img = cv2.resize(img, (150, 150))
    train.append(img)
    test.append("Non Vehicle")
```

`Output:`

๐ We can create console line progress bars and GUI progress bars with the help of the `tqdm` library. We use these progress indicators to check if we are getting stuck someplace and work on it right away ๐ฏ.

```
train = np.array(train)
test = np.array(test)
train.shape,test.shape
```

`Output: ((17584, 150, 150, 3), (17584,))`

๐ The `train` dataset contains **arrays of different images**, whereas the `test` dataset consists of the **labels for each respective image**.

๐ Let us check the `train` and `test` datasets:

```
train[:2]
```

`Output:`

```
test
```

```
Output: array(['Vehicle', 'Vehicle', 'Vehicle', ..., 'Non Vehicle', 'Non Vehicle',
'Non Vehicle'], dtype='<U11')
```

๐ The `test` data, unfortunately, consists of **string** labels. First, we have to convert them into numeric data ๐ข; for instance, **vehicles** will be labeled as `1`, and **non-vehicles** will be labeled as `0`.

```
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
test= le.fit_transform(test)
test
```

`Output: array( [ 1, 1, 1, . . . . , 0, 0, 0 ] )`

๐ Now this will make our life easier!๐ค

๐ The next step involves splitting the `train` dataset into `x_train` and `y_train`, and the `test` dataset into `x_test` and `y_test`. But before that, we need to **shuffle** the training and testing datasets. **Shuffle** is nothing more than rearranging the items of an array.

```
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical

train, test = shuffle(train, test)
x_train, x_test, y_train, y_test = train_test_split(train, test, test_size=0.2, random_state=50)
# one-hot encode the integer labels for the categorical_crossentropy loss used later
y_train, y_test = to_categorical(y_train), to_categorical(y_test)
```

```
import tensorflow as tf
model = tf.keras.models.Sequential()
```

๐ The **Sequential API** is the simplest model to construct and run in Keras. A sequential model enables us to build models layer by layer.

๐ An important thing to note is that right now, the images are in a multidimensional array. We want them to be **flat** rather than **n-dimensional**. If we were to use something more powerful like a `CNN`, we might not require it. But in this case, we definitely want to flatten it. For this, we can use one of the layers built into Keras.

```
model.add(tf.keras.layers.Flatten())
```

๐ The next essential step involves building the **layers**. We will create `five` layers to obtain the result.

```
model.add(tf.keras.layers.Dense(128, activation = tf.nn.relu))
model.add(tf.keras.layers.Dense(64, activation = tf.nn.relu))
model.add(tf.keras.layers.Dense(32, activation = tf.nn.relu))
model.add(tf.keras.layers.Dense(16, activation = tf.nn.relu))
model.add(tf.keras.layers.Dense(2, activation = tf.nn.softmax)) # Output Layer
```

๐ The next most important thing is to decide the number of neurons in the hidden layers. We typically use **systematic experimentation** to determine what works best for our particular dataset. Typically, the number of hidden neurons should decrease in succeeding layers as we move closer to the pattern and identify the target labels.

๐ I have taken `128` neurons for the first layer, and for the subsequent layers, I have used `64`, `32`, and `16` neurons, respectively. You are free to experiment with these parameters. The output layer always consists of **the number of classifications**; in our case, it is `2`. Often, the last layer of a classification network is activated using `softmax`, and the output is a probability distribution of the targets.

๐ An **activation function** is utilized for the internal processing of a neuron. The activation function of a node describes the output of that node given an input or a collection of inputs. We already know that an artificial neural network computes the `weighted sum` of its inputs and then adds a bias. Now, the value of the net output might range from `-Inf` to `+Inf`. The neuron does not understand how to bind the value and so cannot determine the firing pattern ๐ฅ. Thus, the activation function determines whether or not a neuron should be activated.

๐ There are many activation functions, like `sigmoid` and `tanh`. **'Rectified Linear'** or `relu` is one of the activation functions utilized in deep learning models. If it gets any **negative input**, it returns `0`, but if it receives any **positive input**, it returns that `value`. As a result, it may be written as:
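As a minimal sketch of the same rule in plain Python (not TensorFlow's own implementation):

```python
def relu(x):
    # negative input -> 0; positive input -> the value itself
    return max(0.0, x)

print(relu(-3.5))  # 0.0
print(relu(2.0))   # 2.0
```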

```
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

๐ We will use `adam` as our optimization algorithm for iteratively updating the weights.

๐ **Epochs** - are the number of times our training dataset will pass through our neural network and are defined as a hyperparameter (a parameter whose value is used to regulate the learning process).

๐ **Categorical_crossentropy** - Each predicted value is compared to the actual output of 0 or 1, and a score/loss is computed depending on how much it differs from the true value.

```
model.fit(x_train, y_train, epochs = 10)
```

`Output:`

๐ You can notice that with each epoch, the `loss` is **decreasing** and the model's `accuracy` is **increasing**. The highest accuracy reached was approximately `92%`.

๐ We can also evaluate this by:

```
val_loss, val_acc = model.evaluate(x_test, y_test)
print(val_loss, val_acc)
```

```
Output: 110/110 [==============================] - 1s 12ms/step -
loss: 0.2342 - accuracy: 0.9338
```

```
predictions = model.predict([x_test])
print(predictions)
```

```
Output: [[8.3872803e-02 9.1612726e-01]
[7.1111350e-07 9.9999928e-01]
[1.9085113e-03 9.9809152e-01]
...
[9.9999988e-01 1.5221100e-07]
[7.8120285e-01 2.1879715e-01]
[3.9260790e-08 1.0000000e+00]]
```

๐ Wow, that looks like a mess!๐ต Let us try to simplify it:

```
y_pred = model.predict(x_test)
y_pred = np.argmax(y_pred,axis=1)
y_pred[:15]
```

`Output: array([1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1])`

๐ We can also crosscheck the **actual values vs. the predicted values**:

```
y_test = np.argmax(y_test,axis=1)
y_test[:15]
```

`Output: array([1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1])`

๐ That's amazing; the values are almost matching! To get a better look, try:

```
# pick one random test image and show its actual vs. predicted label
plt.figure(figsize=(12,9))
sample_idx = np.random.choice(range(len(x_test)))
plt.imshow(x_test[sample_idx])
plt.xlabel(f"Actual: {y_test[sample_idx]}  Predicted: {y_pred[sample_idx]}")
plt.tight_layout()
plt.show()
```

`Output:`

```
print("y_test value: ",y_test[sample_idx])
print("y_pred value: ",y_pred[sample_idx])
```

```
Output: y_test value: 1 |
y_pred value: 1
```

We hope you liked and found this blog useful!๐คฉCheck out our other blogs below! Also, subscribe to our YouTube channel for more great videos and live projects.


We know that machines are incredibly skilled at processing numerical data ๐ข, but they become sputtering tools ๐ต when we give them text input. Ergo, we must transform the text into something that machines can interpret, which leads us to the term **word embeddings**. But first, let us acquire a few key pieces of information ๐ that will help us along the road.

Word embeddings are a numerical representation of a text. Sentences and texts include organized sequences of information ๐, with the semantic arrangement of words communicating the text's meaning. Extracting meaningful `characteristics` from a `text` body is fundamentally different from obtaining `features` from `numbers`. The goal ๐ฏ is to develop a representation of words that captures their meanings, semantic connections, and the many sorts of situations in which they are employed.

An embedding contains a `dense` vector of floating-point values. Embeddings allow us to utilize an efficient representation in which related words are encoded in a similar manner. Based on the use of words, the distributed depiction is learned ๐.

Modern Natural Language Processing (NLP) ๐ค uses word embeddings that have been previously trained on a large corpus of text and are hence called 'Pre-trained Word Embeddings.' Pre-trained word embeddings are a type of **Transfer Learning**. They are trained on large datasets ๐ and can enhance the performance of a Natural Language Processing (NLP) model because they capture both the `connotative` and `syntactic` meaning of a word. All of these word embeddings are useful during hackathons and in real-world situations.

As its name indicates, transfer learning is transferring the learnings from one job to the next. Knowledge ๐ก from one model is utilized in another, increasing the model's efficiency ๐๐ฏ. As a result of this approach, NLP models are trained faster and perform more efficiently.

We require pre-trained word embeddings for a variety of reasons, like ๐ analyzing survey responses, ๐ feedback, etc. Nevertheless, why can't we just create our own embeddings from scratch instead of using pre-trained techniques? ๐ค๐คจ

The **sparsity of training data** and **multiple trainable parameters** make learning word embeddings from scratch a tedious task ๐คฏ. Using a pre-trained model also reduces the computing cost and makes training NLP models faster ๐.

**Lack of training data** is a significant deterrent ๐. Most real-world issues have a substantial number of uncommon terms in their corpus. When embeddings are learned from such datasets, these words cannot be represented correctly, and the effort can end in a waste of time. A vector that has been `one-hot encoded` is sparse (meaning most records are zero). ๐ Imagine that you have a vocabulary of 10,000 words in your arsenal; you would create a vector 99.99% of whose components are 0.

As embeddings are learned from scratch, **the number of trainable parameters** rises ๐. A delayed training process develops as a result, and a word's representation can be confusing after learning embeddings from scratch.
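A quick sketch of that sparsity with NumPy (the vocabulary size matches the example above; the word's index is a hypothetical choice):

```python
import numpy as np

vocab_size = 10_000
word_index = 42                 # hypothetical position of one word in the vocabulary

one_hot = np.zeros(vocab_size)
one_hot[word_index] = 1.0

# all but one of the 10,000 entries are zero: 99.99% of the vector is empty
print((one_hot == 0).sum())     # 9999
```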

**Pre-trained word embeddings** are the answer to all of the difficulties listed above ๐. In the next part, ๐ฉ we will look at several word embeddings that have been pre-trained.

Google's Word2Vec is one of the most popular pre-trained word embeddings.

Tomas Mikolov created it at Google in 2013 to make neural-network-based embedding training more efficient ๐ฏ; ever since, it seems to be everyone's favorite ๐ค pre-trained word embedding. Word2Vec was trained on the Google News dataset (about 100 billion words!).

Take a peek at the official paper to understand how Word2Vec found its way into the field of NLP!

Word2vec, as the name indicates, represents each word with a specific collection of numbers known as a vector. The vectors are carefully calculated ๐งฎ so that an essential mathematical function (the `cosine` similarity between the vectors) shows the semantic relation between the words represented by those vectors.

๐A classic example of this is: If you take the man-ness out of King ๐คด and add the woman-ness, you get Queen ๐ธ, which captures the comparison between King and Queen.
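The comparison is measured with cosine similarity; here is a small NumPy sketch (the 3-dimensional vectors are made-up stand-ins, since real Word2Vec vectors have hundreds of dimensions):

```python
import numpy as np

def cosine(u, v):
    # cosine of the angle between two vectors: close to 1 means semantically similar
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# toy vectors, purely illustrative
king = np.array([0.9, 0.8, 0.1])
queen = np.array([0.85, 0.82, 0.15])
banana = np.array([0.1, 0.2, 0.9])

print(cosine(king, queen) > cosine(king, banana))  # True: 'king' sits nearer 'queen'
```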

**Continuous Bag-of-Words (CBOW)** and **Skip-Gram** are two distinct learning models used in Word2vec word embeddings. The Continuous Bag of Words (CBOW) model learns the target word ๐ฏ from the adjacent words, whereas the Skip-gram model learns the adjacent words from the target word. As a result, CBOW and Skip-gram are the exact opposites of one another.

๐ A telling example of this: take the simple phrase **'Dean poured himself a cup of coffee.'** We want to understand the embedding for the word **'coffee'**. So here, the target word ๐ฏ is `coffee`. Now, let us see how the models mentioned above interpret this sentence:
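A minimal sketch of how the two models slice the sentence into training pairs (the window size of 2 is an assumption; real Word2Vec uses a configurable window):

```python
def context_pairs(tokens, window=2):
    # for each position, collect up to `window` words on either side
    pairs = []
    for i, target in enumerate(tokens):
        context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        pairs.append((context, target))
    return pairs

tokens = "dean poured himself a cup of coffee".split()
context, target = context_pairs(tokens)[-1]

# CBOW predicts the target from the context; Skip-gram does the reverse
print(target, "<-", context)  # coffee <- ['cup', 'of']
```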

Both models attempt to learn about words in their appropriate use context using a window of nearby words. Using Word2Vec, high-quality word embeddings can be learned efficiently ๐ฏ, enabling more significant embeddings to be learned from considerably larger corpora of text.

See, that was easy! ๐ค Let us explore the next one now.

GloVe is an unsupervised, count-based learning model that employs co-occurrence data (the frequency with which two words occur together) at a global level ๐ to build word vector representations ๐ข. The 'Global Vectors' model is so termed because it captures statistics directly at a global level. Developed by Jeffrey Pennington, Richard Socher, and Christopher D. Manning, it has acquired popularity among NLP practitioners owing to its ease of use and effectiveness ๐ฏ.

If you're searching for a thorough dig, here's the official paper!

In GloVe, words are mapped into a meaningful space where the distance between words is linked ๐ to their semantic similarity. It combines the properties of two model families, notably the global matrix factorization and the local context window techniques, as a log-bilinear regression model for unsupervised learning of word representations. Training is based on global word-word co-occurrence data from a corpus, and the resultant representations reveal fascinating ๐คฉ linear substructures of the word vector space.

**The question is: how can statistics convey meaning? ๐ค**
Examining the co-occurrence matrix is one of the most straightforward approaches to accomplish this goal. **How often do certain words appear together?** That is exactly what a co-occurrence matrix tells us! Co-occurrence matrices count the number of times a given pair of words appear together.
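A toy sketch of such counting (the two-sentence corpus is hypothetical, and co-occurrence is counted per sentence here, whereas GloVe counts within a sliding window over a huge corpus):

```python
from collections import Counter
from itertools import combinations

corpus = ["ice is cold solid water",
          "steam is hot gas water"]

cooc = Counter()
for sentence in corpus:
    words = sentence.split()
    # every unordered pair of words in the same sentence co-occurs once
    for a, b in combinations(words, 2):
        cooc[frozenset((a, b))] += 1

print(cooc[frozenset(("ice", "solid"))])    # 1
print(cooc[frozenset(("steam", "solid"))])  # 0
```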

๐An illustration of how GloVe's co-occurrence probability ratios operate is provided below.

Where `Pi` is the 'probability of ice' ๐ง and `Pj` is the 'probability of steam'.

`P(solid|ice)` has a greater probability, as ice is more comparable to solid than steam is, as shown by `P(solid|steam)`. A similar distinction may be made between ice and steam. `P(fashion|ice)` and `P(fashion|steam)` are independent probabilities, but they do not support the conclusion that there is no link between the two. So the concept behind GloVe is to represent words by considering the co-occurrence probability as a ratio: `P(fashion|ice) / P(fashion|steam)` almost equals one (0.96).

Raw probabilities cannot distinguish between important terms (solid and gas) and irrelevant words (water and fashion). The ratio of probabilities, on the other hand, can distinguish between the two relevant words more effectively.

๐ GloVe Embeddings are based on the principle that **" the ratio of conditional probabilities conveys the word meanings."**

where,

`wi`, `wj` are words in context,

`wฬk` is a word out of context, and

`Pik`, `Pjk` are co-occurrence probabilities derived from the corpus.

fastText, created as part of Facebook's AI research, is a vector representation approach.

This quick and efficient technique lives up to its name. The concept is quite similar to that of Word2Vec, but with a big twist ๐: instead of building embeddings only from whole words, fastText goes one step deeper ๐ and works with characters instead of words.

Have a look at the official paper:

fastText breaks words down into character `n-grams` (sub-words) instead of sending them directly into the neural network as individual words.

๐ For example, the word **"there"** has the following character trigrams (3-grams):

In order to differentiate between the n-gram of a word and the word itself, the boundary symbols `<` and `>` are added.

๐ If the word **'her'** is part of the vocabulary, it is represented as **< her >**. Thus, n-grams maintain the meaning of shorter words. Suffixes and prefixes can also be interpreted in this way.
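Extracting these character n-grams takes only a couple of lines (a sketch of the idea, not fastText's actual implementation):

```python
def char_ngrams(word, n=3):
    # wrap the word in boundary symbols, then slide an n-character window over it
    wrapped = f"<{word}>"
    return [wrapped[i:i + n] for i in range(len(wrapped) - n + 1)]

print(char_ngrams("there"))  # ['<th', 'the', 'her', 'ere', 're>']
print(char_ngrams("her"))    # ['<he', 'her', 'er>']
```

Note how `her` as a whole word becomes `<he`, `her`, `er>`, which is distinct from the plain `her` trigram inside `there`.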

There are two primary benefits ๐คฉ of fastText. First, **generalization is feasible** as long as new words have the same characters as existing ones. Second, **less training data** is required since each piece of text may be analyzed ๐ for more information. The fastText model is pre-trained for more languages than any other embedding technique.

A state-of-the-art pre-trained model, ELMo, has been created by Allen NLP and is accessible on TensorFlow Hub.

NLP scientists ๐จ๐ฌ across the world have started using ELMo (Embeddings from Language Models) for activities in both research and industry. This is done by learning ELMo embeddings from the internal state of a bidirectional LSTM. Various NLP tests have demonstrated that it outperforms ๐ other pre-trained word embeddings like Word2Vec and GloVe. Thus, as a vector or embedding, ELMo uses a different approach ๐ก to represent words.

For further insight, you can check out the official paper here:

ELMo does not employ a dictionary of words ๐ and associated vectors but instead analyses ๐ words in their context. It expresses embeddings for a word by understanding the full phrase that contains that word, unlike GloVe and Word2Vec. Due to the context-sensitive nature of ELMo embeddings, different embeddings can be generated for the same word in various phrases.

๐For instance, consider these two sentences:

`Watch` is employed as a verb in the first sentence but as a noun in the second. Polysemous words are those that appear in diverse contexts ๐ in different phrases. GloVe and fastText are unable to handle words of this kind. On the other hand, ELMo is equipped to handle polysemous words ๐คฉ.

BERT (Bidirectional Encoder Representations from Transformers) is a machine learning ๐ค approach based on transformers for pre-training in natural language processing. Jacob Devlin and his Google colleagues built and released BERT in 2018. By showing state-of-the-art results in an array of NLP tasks, it has gained significance ๐ in the machine learning field.

Interested in learning more? You may read the official documentation here!

Technically, BERT's breakthrough is the application of the Transformer's bidirectional ๐ training to language modeling. Using Masked LM (MLM), the researchers could train models in a bidirectional fashion that was previously unachievable. In earlier efforts, a text sequence was analyzed from `left` to `right` or from `right` to `left`. Using bidirectional training, the study shows that a model's understanding of linguistic context and flow may be enhanced ๐ฏ over single-direction models.

The left and right contexts must be considered before comprehending the meaning of the sentence. BERT accomplishes exactly that! ๐คฉ

Half of BERT's success comes from its pre-training stage. The reason for this is that the models become more sophisticated as they are trained on a massive corpus of text ๐.

Originally, there are two different versions of BERT in English:

BERT BASE: `12` encoders with `12` bidirectional self-attention heads

BERT LARGE: `24` encoders with `16` bidirectional self-attention heads

Thus now, NLP problems consist of only two steps:

1. Train a language model on a huge, unlabeled text corpus.

2. Use the vast knowledge base that this model has accumulated and fine-tune it to particular NLP tasks.

With an understanding of both the left and the right context, BERT is used to pre-train deep bidirectional representations from unlabeled text to improve performance ๐. Adding a single extra output layer to the previously trained BERT model allows it to be fine-tuned for a wide range of Natural Language Processing applications.

This is the final destination! ๐ค Knowing these pre-trained word embeddings will certainly give you an edge in the field of NLP ๐. In terms of text representation, pre-trained word embeddings tend to capture both the semantic and syntactic meaning of words.


We hope you liked the blog and found it useful! Also, check out our YouTube channel for more great videos and projects.

Hey Coders! Solving a problem is happiness, right? What if we not only find a way to solve the problem but also understand, analyze, and improve it? I call it double happiness.

The problems we are going to solve today are asked in top tech company interviews at *Google, Facebook*, and *Microsoft*. Why wait? Let's get started with a widespread **programming problem**, the maximum subarray sum.

The maximum subarray sum is the largest sum obtained by contiguous elements in an array. If we're given an array of integers and asked to find the subarray with the maximum sum, we use Kadane's algorithm.

Why not first figure out whether we really need an algorithm to find the subarray with the maximum sum?
The first thought that pops up on looking at the question might be to consider the entire array.

Case 1:- Sum of the complete array:- This method doesn't work because the array contains negative numbers as well.

It seems there is a problem with negative numbers. Then let's try ignoring all the negative numbers.

Case 2:- Considering the longest subarray containing only positive numbers.

Oh no! This isn't working either. It must be so, as we ignored the larger number(s) present after the negative number.

How about finding all the possible subarrays (the brute force approach) to obtain the maximum sum? It definitely works. But it's time-consuming (the time complexity would be O(n²)). Implementing Kadane's algorithm would be a better idea; let's see how it works.
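For comparison, here is a sketch of that brute-force O(n²) approach (the function name is mine):

```python
def max_subarray_bruteforce(arr):
    # try every contiguous subarray: O(n^2) (start, end) pairs
    best = arr[0]
    for start in range(len(arr)):
        total = 0
        for end in range(start, len(arr)):
            total += arr[end]       # running sum of arr[start..end]
            best = max(best, total)
    return best

print(max_subarray_bruteforce([7, -13, 4, 6, 22, -3, 9, -8]))  # 38
```

Every element is revisited for every earlier starting point, which is exactly the redundant work Kadane's algorithm avoids.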

**Working of Kadane's algorithm:-**

If the given array is [7, -13, 4, 6, 22, -3, 9, -8], the subarray with the maximum sum is [4, 6, 22, -3, 9], whose sum is 38.

Kadane's algorithm aims to look for all contiguous segments of the array with a positive-sum and keeps track๐ of the maximum sum among all positive sums.

Let's understand the approach in detail.

Kadane's algorithm uses two variables to track the current sum and maximum sum.

- 1. For the current sum:- We start from 0 and keep adding array elements one by one from left to right (it's the sum of elements till that index). If the value becomes negative, we ignore all the elements till that index and make the current sum zero.

Here, if we consider [7, -13], as their sum is negative, they will reduce the overall sum of any subarray in which they are included. Hence we avoid such segments and reset the sum to 0.

- 2. The maximum sum stores the maximum value of all the possible sums (it's updated whenever we encounter a greater sum value).

**Approach:-**

Kadane's algorithm uses an iterative dynamic programming approach.
Although there is more to dynamic programming, we can understand it as the process of remembering the results of simpler subproblems to save computation time later.
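In dynamic-programming terms, the subproblem is "the best sum of a subarray ending exactly at index i". A compact sketch of that recurrence (function name mine; note that this textbook variant, which initializes from the first element, also handles all-negative arrays):

```python
def kadane_dp(arr):
    # best_ending_here = max subarray sum that ends at the current
    # index: either extend the previous run or start fresh here
    best_ending_here = best_overall = arr[0]
    for x in arr[1:]:
        best_ending_here = max(x, best_ending_here + x)
        best_overall = max(best_overall, best_ending_here)
    return best_overall

print(kadane_dp([7, -13, 4, 6, 22, -3, 9, -8]))  # 38
print(kadane_dp([-3, -1, -2]))                   # -1
```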

**Kadane's algorithm pseudo-code:-**

*Initialize the current sum to 0 and the max sum either to INT_MIN or the first element of the array.*

*Iterate through the array:*

- *Add each array element to the current sum.*
- *Update the max sum if the current sum is greater than the max sum.*
- *Make sure the current sum is non-negative (assign it to 0 if it becomes negative).*

**The drawback of Kadane's algorithm:-** In this form, Kadane's algorithm requires at least one positive number, so a completely negative array is an invalid input.

How do we overcome this?

After understanding Kadane's algorithm, overcoming this drawback is a cakewalk.

If the array contains only negative numbers, then the answer is the largest of them (the negative number closest to zero).

In order to check whether the array is all negative, we maintain a flag. On encountering a positive number, the flag becomes false. If the flag remains true till the end, we return the largest negative number.

Here is the code that implements the above logic.

**Python Code:-**

```
### Improved version of Kadane's algorithm for the maximum subarray sum
def maxSubArraySum(array, size):
    max_sum = array[0]
    current_sum = 0
    only_negative = True
    negative_result = array[0]
    for i in range(0, size):
        current_sum = current_sum + array[i]
        if current_sum < 0:
            if only_negative:
                # track the largest element seen so far in an all-negative array
                negative_result = max(current_sum, negative_result)
            current_sum = 0
        else:
            only_negative = False
            max_sum = max(current_sum, max_sum)
    if only_negative:
        # every element was negative: return the largest (least negative) one
        return negative_result
    return max_sum

arr = list(map(int, input().split()))
n = len(arr)
print(maxSubArraySum(arr, n))
```

`Time Complexity:- O(n)`

`Space Complexity:- O(1)`

Sample Input:-

`7 -13 4 6 22 -3 9 -8`

Sample Output:-

`38`

Kadane's algorithm uses two variables, one to store the current non-negative sum and the other to store the maximum sum till that point (index).

Let's look at an example and sift through each iteration ๐.

The maximum sum is updated by comparing it with all the positive values of the current sum. After reaching the end of the array, we return the maximum sum.

**Conclusion:-**

As Kadane's algorithm is a simple method that tracks the maximum at each position using the previously obtained value, we can find the maximum subarray sum with O(n) runtime.

Wanna search for patterns in a text? Wanna find all the occurrences of a pattern? Scroll down.

We use the Knuth-Morris-Pratt (KMP) algorithm to solve a classical problem, the string matching problem. String matching algorithms play a crucial role in solving real-world problems. Some of their masterful applications are bioinformatics and DNA sequencing, plagiarism detection, spell checkers, spam filtering, information retrieval systems, etc.

**Why KMP algorithm?**

The KMP algorithm is the first linear-time algorithm for string matching. Hence it's preferred when the search text is large or when processing large files.

The KMP algorithm guarantees linear worst-case complexity ( fault resistant for accidental inputs).

To better understand the importance of the KMP, we will first look at the brute force approach.

**Brute force approach:-**

Brute-force string matching compares the pattern with all substrings of the text. Those comparisons between substring and pattern proceed character by character.

In the brute-force approach, a window with the same length as the pattern moves over the text. Though this algorithm requires no preprocessing and no extra space, too many comparisons make it slow, giving a worst-case time complexity of O(m(n-m+1)). (There are n-m+1 window positions, and each comparison checks for a pattern match of length m; hence the complexity turns out to be m(n-m+1).)

We are going to reduce the complexity from O(mn) to O(n) using the KMP algorithm. Exciting, right? Let's proceed.

**Working of KMP algorithm:-**

**The idea of KMP algorithm:-**

The KMP algorithm reduces the runtime by figuring out the text parts known to match, which helps skip a few comparisons.

KMP algorithm consists of 2 parts:-

i. Preprocessing:-

We generate a prefix array to find matches within the given pattern. Here, finding a match means checking if the characters match the pattern's prefix.

Let's understand how to generate a prefix array with an example.

Initially, the prefix array starts with 0, so for A at 1st position, the value is 0. Coming to the 2nd position, we just carry forward the previous prefix value since there is no match. And for the 3rd position, we not only carry forward the previous value but also increment it as it matches with the prefix ( as the 3rd character matches with the 1st )

Basically, the prefix array stores the number of characters that match the pattern's prefix till that point. This analysis of the pattern before processing helps to reduce iterations.
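The prefix-array construction described above can be sketched as follows (the helper name is mine; on a mismatch we fall back to the previous, shorter matching prefix instead of restarting):

```python
def build_prefix_array(pattern):
    # prefix[i] = length of the longest proper prefix of pattern[:i+1]
    # that is also a suffix of it
    prefix = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = prefix[k - 1]  # fall back to a shorter matching prefix
        if pattern[i] == pattern[k]:
            k += 1
        prefix[i] = k
    return prefix

print(build_prefix_array("ABA"))   # [0, 0, 1]
print(build_prefix_array("AAAA"))  # [0, 1, 2, 3]
```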

ii. Searching:-

Searching is to find the occurrences of the pattern in the text. The advantage of KMP is that we can skip a few iterations by using the data in the prefix array.

Here's a diagram to show the flow of searching in the KMP algorithm

Meaning of the pattern pointer according to the prefix array:- For instance, if we are at index i of the pattern, we assign prefixArray[i-1] to the pattern pointer.

Don't worry if all of this seems complicated. Let's understand by considering the example where the pattern is "ABA" and the text is "ABAABA." Here, we start by comparing the first three characters and find that it is a perfect match.

Since we already know that the 3rd character of the text and the 1st character of the pattern are the same (both are 'A'), instead of comparing them again, we start comparing the next characters.

KMP algorithm pseudo-code:-

*i. Preprocessing*

- *Start the prefix array with 0.*
- *Store the number of matches in the pattern with its prefix.*

ii. *Searching*

- *If a match occurs, increment the text and pattern pointers.*
- *If the pattern reaches the end:*
  - *Count the occurrence and change the pattern pointer according to the prefix array.*
- *Else, if a mismatch occurred:*
  - *If the pattern pointer is not at the beginning, change it according to the prefix array.*
  - *If the pattern pointer is at the beginning, increment the text pointer.*

For a clear and complete idea, let's reconsider the pattern ABA, whose prefix array is [0, 0, 1], in a text.

We keep incrementing the text and pattern pointers as the characters match. Once we reach the end of the pattern, since the pattern is found, we increment the occurrence count and skip one iteration (as prefixArray[2] = 1).

A mismatch occurred at the 2nd position. We assign the pattern pointer to the previous prefix array value (patternPointer = prefixArray[0]). As the new pattern pointer value is 0, we can't skip any iterations.

We increase the occurrence count by one as the pattern is found again.
Once we reach the end of the text, we stop iterating and return the number of occurrences (return 2).

**Approach:- **

KMP works based on the degenerating property of the pattern (a pattern having the same sub-patterns appearing more than once), which makes it possible to detect multiple occurrences of the pattern in a text. By using this property, the algorithm finds all the occurrences of the pattern. Here partial matching doesn't count; the pattern should be a perfect match (the entire pattern should be present in the string).

In preprocessing, the analysis of the pattern is kept ready, which is helpful to reduce iterations while searching.

**Python Code:-**

```
### Python3 code for the KMP algorithm
text = input()
pattern = input()
m = len(pattern)
n = len(text)
prefixArray = [0 for i in range(m)]

def preprocessing(pattern, n, prefixArray):
    i = 1
    patternPointer = 0
    while i < n:
        if pattern[i] == pattern[patternPointer]:
            patternPointer += 1
            prefixArray[i] = patternPointer
            i += 1
        else:
            if patternPointer != 0:
                patternPointer = prefixArray[patternPointer - 1]
            else:
                prefixArray[i] = 0
                i += 1

def searching(pattern, text):
    textPointer = 0
    patternPointer = 0
    occur_count = 0
    preprocessing(pattern, m, prefixArray)
    while textPointer < n:
        if text[textPointer] == pattern[patternPointer]:
            textPointer += 1
            patternPointer += 1
            if patternPointer == m:
                occur_count += 1
                patternPointer = prefixArray[patternPointer - 1]
        elif textPointer < n and text[textPointer] != pattern[patternPointer]:
            if patternPointer != 0:
                patternPointer = prefixArray[patternPointer - 1]
            else:
                textPointer += 1
    return occur_count

print(searching(pattern, text))
```

`Time Complexity:- O(n)`

`Space Complexity:- O(m)`

Sample Input:-

`abaaba`

`aba`

Sample Output:-

`2`

The KMP algorithm uses preprocessing to find all the pattern occurrences in the text in linear time.

**Conclusion:-**

KMP algorithm is one of the most widely used search algorithms due to its worst-case fault-resistant nature and efficiency in handling large files.

Yay! We understood a challenging algorithm. I guess mastering the upcoming quick select algorithm will be effortless.

We use the quick select algorithm to find the kth smallest element in an unordered list. Here the kth smallest element is the element present at the kth position after sorting the array.

To show you what I mean, here is an example:

Array = [ 4, 7, 2 ] here the 1st smallest is 2, 2nd smallest is 4, and 3rd smallest is 7.
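Before optimizing, note the obvious baseline: sort the array, then index into it (k is 1-based). A minimal sketch (function name mine):

```python
def kth_smallest_naive(arr, k):
    # O(n log n): sort a copy, then take the element at position k
    return sorted(arr)[k - 1]

print(kth_smallest_naive([4, 7, 2], 1))  # 2
print(kth_smallest_naive([4, 7, 2], 3))  # 7
```

Quick select improves on this by avoiding a full sort.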

The quick select algorithm is similar to the quicksort algorithm. Before we jump into quick select, let's take a glance at quicksort.

**Quicksort:-** As the name suggests, quicksort is significantly faster in practice than other O(nlogn) algorithms. Quicksort is a divide-and-conquer algorithm. It separates the list of elements into two parts and then sorts them recursively.

The separation happens based on the value of 'pivot' ( the comparison measure ), in such a way that all the elements on the left side are less than the pivot, and on the right side, elements are greater than the pivot.
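For reference, here is a short (not in-place) quicksort sketch that mirrors this description, using the last element as the pivot:

```python
def quicksort(arr):
    # divide and conquer: split around the pivot, sort both halves
    if len(arr) <= 1:
        return arr
    pivot = arr[-1]
    left = [x for x in arr[:-1] if x <= pivot]   # elements <= pivot
    right = [x for x in arr[:-1] if x > pivot]   # elements > pivot
    return quicksort(left) + [pivot] + quicksort(right)

print(quicksort([7, 13, 4, 9, 24, 6, 3, 18]))
```

The in-place version used later in this post does the same split with swaps instead of new lists.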

**Approach:-**

The quick select algorithm takes its roots from the quicksort algorithm. It uses the same divide and conquer technique but selects and iterates only through the subarray containing the kth smallest element.

Importance of the quick select algorithm:-

- Like quicksort, quick select is fast in practice (though its worst-case performance is poor).

- It's an in-place algorithm (it doesn't use extra space).

- Quick select is used in solving problems like:

  - Finding the median.

  - Finding the kth minimum or maximum element.

**Pseudo-code:-**

- *Pivoting:- Bring the pivot to its appropriate position.*
- *Select the part that contains the kth smallest element:*
  - *Select the left part if the index of the pivot is greater than k.*
  - *Else, select the right part.*
- *Repeat these steps till we find the kth smallest element.*

Quick select repeats two steps until the kth element is found: swapping (placing the pivot at its correct location) and partitioning (choosing one side of the pivot element).

*Example of Quick select algorithm implementation.*

This step is called pivoting. (placing pivot element at correct position)

We select the left part of pivot (left part of 18) as pivot index ( 6 ) is greater than k-1 ( 3 ).

After placing the pivot element ( 3 ) at the correct position ( by swapping with 7 ), we select the right part of the pivot (as pivot index (0) is less than k-1 (3) ). Again we assign the last element to pivot.

Now, we place the new pivot element (7) at the appropriate position. Since the pivot index is placed at the kth position ( at 4th position ), we return the pivot element (7 is the answer).

**Problem Statement:-** Given an array of integers, find the kth smallest element.

**The idea of the quick select algorithm:-**

Instead of recurring for both sides (after finding pivot), the quick select algorithm recurs only for the part that contains the kth smallest element. The logic is simple: We recur for the left part if the partitioned element index is more than k. If the index is k, we return ( found kth element). If the index is less than k, then we recur for the right part.

**Python Code:-**

```
### Python3 program for quick select
def partition(arr, l, r):
    pivot = arr[r]
    pivotIndex = l
    for j in range(l, r):
        if arr[j] <= pivot:
            arr[pivotIndex], arr[j] = arr[j], arr[pivotIndex]
            pivotIndex += 1
    arr[pivotIndex], arr[r] = arr[r], arr[pivotIndex]
    return pivotIndex

def kthSmallest(arr, l, r, k):
    if k > 0 and k <= r - l + 1:
        pivotIndex = partition(arr, l, r)
        if pivotIndex - l == k - 1:
            return arr[pivotIndex]
        if pivotIndex - l > k - 1:
            return kthSmallest(arr, l, pivotIndex - 1, k)
        return kthSmallest(arr, pivotIndex + 1, r, k - pivotIndex + l - 1)
    print("Index out of bound")

arr = list(map(int, input().split()))
n = len(arr)
k = int(input())
print(kthSmallest(arr, 0, n - 1, k))
```

`Time Complexity:-`

`Average:- O(n)`

`Worst:- O(n^2)`

`Space Complexity:- O(1)`

Sample Input:-

`7 13 4 9 24 6 3 18`

`4`

Sample Output:-

`7`

The recurrence relation of quicksort is `T(n) = n + 2T(n/2)`, making its complexity O(n log n), while that of quick select is `T(n) = n + T(n/2)`.
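Expanding both recurrences makes the gap clear (a quick sketch):

```
Quick select: T(n) = n + T(n/2)
                   = n + n/2 + n/4 + ... + 1
                   <= 2n                        =>  O(n)

Quicksort:    T(n) = n + 2T(n/2)
                   = n work at each of ~log n levels  =>  O(n log n)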

**Conclusion:-**

The quick select algorithm uses a linear number of comparisons (on average) and is thus more efficient than the sorting method. Selecting only one side after pivoting is what makes quick select faster than quicksort for this problem.


I hope you found the blog helpful. Keep learning with us ๐จ๐๐ฉ๐. Check out our latest videos and projects on our YouTube channel.

**Data structures** provide a means to efficiently manage **large amounts** of data for various scenarios, such as large databases and internet indexing services. Usually, efficient data structures are essential in designing dynamic algorithms. Data structures are used to organize the storage and retrieval of information stored in both main memory and secondary memory. Data structures have a broad and diverse scope of usage across **Computer Science** and **Software Engineering**. Data structures have found their use in almost every program or software system that has been developed. Hence, as developers, we must have good knowledge of data structures. On the other hand, an algorithm is a collection of steps to solve a particular problem.

๐ก An algorithm is a step-by-step method that defines instructions to be executed in a specific order to get the desired output. Algorithms are self-sufficient and remain independent from underlying languages, i.e. the same algorithm can be performed in more than one programming language ๐จ๐ป.


From the data structure point of view, given below are some significant categories of algorithms:

- **Sort:** Sorting items in a specific order.
- **Search:** Searching for an item in a data structure.
- **Update:** Updating an existing item in a data structure.
- **Insert:** Inserting an item in a data structure.
- **Delete:** Deleting an existing item from a data structure.

Not all styles or functions in a program can be called an algorithm. An algorithm should have the following features:

- **Unambiguous:** The algorithm should be straightforward. Each of its phases or steps and its inputs/outputs should be clear and lead to only one meaning.
- **Finiteness:** An algorithm must terminate after a finite number of steps.
- **Input:** An algorithm should have 0 or more well-defined inputs.
- **Output:** An algorithm should have one or more well-defined outputs that match the desired result.
- **Independent:** An algorithm should have step-by-step commands, which should be independent of any programming code.
- **Feasibility:** An algorithm must be feasible with the available resources.

Given the breadth of the topics in data structures and algorithms, it is often a **daunting task** ๐ฅบ for many to decide how and from where to start studying DSA.

Knowing that data structures and algorithms are the base for every program, let us learn how we can start with our knowledge journey of data structures and algorithms.

The first and foremost step is to start with learning the basic data structures such as **Arrays, Linked Lists, Stacks, and Queues**. Then, begin with **searching and sorting algorithms** such as Binary Search and Bubble Sort as they are the most prominent types of algorithms which will be used in the coming advanced topics in data structures and algorithms. Here is a roadmap on which topics to learn:

An array is a structure of **fixed size**, which can hold items ๐ of the same data type. It can be an array of integers, floating-point numbers, strings, or even an array of arrays (for example, 2-dimensional arrays). **Random access** is possible since arrays are indexed, meaning that an element in an array can be accessed by using a specific index for each element. As usual, the index starts from 0.

*Applications of arrays*

They are used as the building blocks to build other data structures such as array lists, heaps, hash tables, vectors, and matrices.

Used for different sorting algorithms such as insertion sort, quick sort, bubble sort, and merge sort.

A linked list is a **sequential structure** consisting of items linked to each other in linear order. Linked lists provide a flexible and straightforward illustration of **dynamic sets**. Data has to be accessed sequentially, and random access is not possible. Elements contained in a linked list are called **nodes**. Every node contains a key and a pointer to its successor node, known as next, and the attribute defined as "head" points to the first element of the linked list. The last element of a linked list is known as the tail.
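A minimal node-and-traversal sketch of this structure (class and variable names are mine):

```python
class Node:
    # every node stores a key and a pointer to its successor ("next")
    def __init__(self, key):
        self.key = key
        self.next = None

# build head -> 1 -> 2 -> 3
head = Node(1)
head.next = Node(2)
head.next.next = Node(3)

# no random access: we must walk the chain from the head
keys = []
node = head
while node is not None:
    keys.append(node.key)
    node = node.next
print(keys)  # [1, 2, 3]
```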

*Applications of linked lists*

They are used for implementing stack and queue.

They are used in switching between programs using Alt + Tab (implemented using Circular Linked List).

A stack is a **LIFO** (Last In, First Out: the element placed last can be accessed first) structure, commonly found in many programming languages. This structure is named a stack because it resembles a real-world stack, such as a **stack of plates.**

A queue is a **FIFO** (First In, First Out: the element placed first can be accessed first) structure, commonly found in many programming languages. This structure is called a queue because it resembles a real-world queue: **people waiting in a queue.**
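Both behaviors can be sketched with Python's built-ins (`list` for a stack, `collections.deque` for a queue):

```python
from collections import deque

stack = []                  # LIFO: push and pop at the same end
stack.append("plate1")
stack.append("plate2")
print(stack.pop())          # plate2 -- the last plate added comes off first

queue = deque()             # FIFO: enqueue at the back, dequeue at the front
queue.append("alice")
queue.append("bob")
print(queue.popleft())      # alice -- the first person in line is served first
```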

*Applications of stacks*

Used for expression evaluation (e.g., the shunting-yard algorithm for parsing and evaluating mathematical ๐งฎ expressions).

Used to implement function calls in recursion programming.

*Applications of queues*

Used to handle threads in multithreading.

It is used to implement queuing systems (e.g., priority queues).

A Hash Table is a data structure that contains values that have keys associated with them. It is very effective for searching and inserting, regardless of the size of the data. Moreover, it supports efficient item lookup if we know the key related to the value.

*Applications of hash tables*

They are used to implement database ๐ indexes.

Used to implement associative arrays.

Used to implement the set data structure.
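Python's `dict` is itself a hash table, so these properties are easy to demonstrate (names and numbers below are made up for illustration):

```python
# average O(1) insertion and lookup by key, independent of table size
phone_book = {"alice": "555-0101", "bob": "555-0199"}
phone_book["carol"] = "555-0142"   # insert
print(phone_book["alice"])         # lookup by key
print("dave" in phone_book)        # membership test is also O(1) on average
```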

A tree is a **hierarchical structure** where data is arranged hierarchically and linked together. This structure is distinct from a linked list, where items are connected in a linear order. Various trees have been developed throughout the past decades to suit specific applications and meet certain limitations. A few examples are binary search trees, red-black trees, B trees, AVL trees, splay trees, and n-ary trees.

*Applications of trees*

Binary Search Tree: These are used in many search applications where data is constantly entering and leaving.

Binary Trees: These are used to execute expression solvers and expression parsers.
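A minimal binary search tree sketch (names are mine). An in-order traversal of a BST yields the keys in sorted order, which is why BSTs suit search-heavy workloads:

```python
class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    # smaller keys go to the left subtree, larger to the right
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def inorder(root):
    # left subtree, node, right subtree -> sorted order
    if root is None:
        return []
    return inorder(root.left) + [root.key] + inorder(root.right)

root = None
for k in [7, 3, 9, 1, 5]:
    root = insert(root, k)
print(inorder(root))  # [1, 3, 5, 7, 9]
```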

Searching algorithms are of utmost importance ๐ as these are the class of algorithms that allow us to efficiently retrieve information from various data structures. These mainly include, **Linear Search, Binary Search, and Interpolation Search**.
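A sketch of binary search, the workhorse of this class of algorithms (it requires sorted input):

```python
def binary_search(sorted_arr, target):
    # halve the search range on every step: O(log n) comparisons
    lo, hi = 0, len(sorted_arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_arr[mid] == target:
            return mid
        if sorted_arr[mid] < target:
            lo = mid + 1   # target can only be in the right half
        else:
            hi = mid - 1   # target can only be in the left half
    return -1              # not present

print(binary_search([2, 4, 6, 7, 9], 7))  # 3
print(binary_search([2, 4, 6, 7, 9], 5))  # -1
```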

Sorting algorithms are another important category of algorithms as they allow us to **sort data** in a particular order for efficient functioning. Also, sorted data is a prerequisite for other algorithms such as Binary Search.
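Bubble sort, mentioned above as a starting point, can be sketched in a few lines:

```python
def bubble_sort(items):
    # repeatedly swap adjacent out-of-order pairs; O(n^2) but simple
    a = list(items)                # work on a copy
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```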

The process in which a **function calls itself** ๐ indirectly or directly is called recursion, and the corresponding function is called a **recursive function**. A lot of problems can be solved quite easily by using recursive functions.
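The classic example is the factorial function, which calls itself on a smaller input until it reaches a base case:

```python
def factorial(n):
    if n <= 1:                        # base case stops the recursion
        return 1
    return n * factorial(n - 1)       # the function calls itself

print(factorial(5))  # 120
```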


Backtracking is an algorithmic technique used to solve problems recursively by building a solution incrementally, one piece at a time. It does this by removing those partial solutions that fail to satisfy the problem's constraints at any given point of time (by time, we mean the time passed till reaching any level of the search tree).
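A small backtracking sketch using subset-sum (function name mine): we extend a partial selection one element at a time and abandon any branch whose sum already violates the constraint:

```python
def subset_sum(nums, target, partial=()):
    # build a candidate solution one element at a time
    s = sum(partial)
    if s == target:
        return list(partial)       # constraint satisfied: a solution
    if s > target:
        return None                # constraint violated: backtrack
    for i, n in enumerate(nums):
        found = subset_sum(nums[i + 1:], target, partial + (n,))
        if found is not None:
            return found
    return None                    # no extension works from here

print(subset_sum([3, 9, 8, 4, 5, 7], 15))
```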

You can either choose to learn from videos on YouTube, start formal training by enrolling ๐จ๐ซ in a course from websites such as **AI Probably**, or by reading books. Besides, you can also check out coding platforms such as:

You can also check out these popular books ๐ฐ for DSA:

- **Introduction to Algorithms:** Thomas H. Cormen
- **Cracking the Coding Interview:** Gayle Laakmann McDowell
- **Algorithms:** Robert Sedgewick
- **Data Structures and Algorithms Made Easy:** Narasimha Karumanchi
- **The Art of Computer Programming:** Donald E. Knuth

Other than these, you can also enroll in this course on **DSA: Data Structures and Algorithms with Python by AI Probably** ๐. This course takes you from the very **basics** of data structures and algorithms to an **advanced** ๐ level in just a few weeks. Also, the course is going to be live soon!

Also read - Data Structure & Algorithms interview questions

Most of the other courses you would encounter while searching for a course on DSA would be in JAVA or C++ language. But considering the popularity ๐ and ease of use of Python, this course will teach you all the things you need to know about DSA in Python, from data structures like arrays, trees, and heaps to searching and sorting algorithms to multithreading. This comprehensive course is all you need to become a master of Data Structures and Algorithms!

Today, we take a look at what events are and how the browser or the website reacts to some of the most common events with the help of **event listeners** developed by programmers.

So, let's get started!

**In this blog, I will try to cover the following topics:**

|-- Events

|-- Types of events

| -- onchange event

| -- onclick event

| -- onload and onunload event

| -- Mouse events

| -- Event Listeners

|-- Event Listener Methods

| -- addEventListener()

| -- removeEventListener()

Events are actions or occurrences that happen when the user tries to manipulate a webpage, the system tells us about the event, and we can respond to it accordingly.

For example, **loading a webpage**, when the user **clicks a button**, when we close a window, when we **press** a key, or when we **resize a window** are all one form of event or another.

๐ These events can be used to execute JavaScript responses, which causes the buttons to close the window, data to be validated, messages to be displayed, or any other type of response imaginable by the developer. ๐จ๐ป

๐ฐ The most common types of events in JavaScript Include:

The `onchange` event is frequently used in **conjunction** with **input field validation**. When the value of an **element** is **modified**, the **onchange** event is **triggered**.

**Example:**

```
Enter your name: <input type="text" id="Name" onchange="upperCase()">
// Registering an event
<script>
function upperCase() { // function to be triggered
const name = document.getElementById("Name"); //getting element
name.value = name.value.toUpperCase(); // Changing the name to uppercase
}
</script>
```

Whenever the user **changes the content** of the input field, the `upperCase()` function will be called.

Note: The function converts everything into uppercase when the content inside the input changes.

The `onclick` event occurs whenever the user **clicks** on an HTML element.

**Example: **

```
<h2 onclick="this.innerHTML='Welcome To AI Probably'">AI Probably</h2>
```

In this example, when the user clicks on the `AI Probably` heading, the text changes to **Welcome To AI Probably.**

A better example for your understanding would be that a website window gets closed by clicking its close button. This event triggers an **event handler**, a function that closes the window as soon as the close button is clicked.

When a user enters or leaves a page, the `onload` and `onunload` events are fired, respectively.

The `onload` event can be used to determine the visitor's browser type and version and then load the appropriate version of the web page based on that data. Cookies can be handled using the `onload` and `onunload` events.

```
<body onload="load()"> <!-- when the page is loaded -->
<script>
function load() { // function gets triggered on loading
  alert("Welcome to AI Probably");
}
</script>
</body>
```

As soon as the webpage is loaded, an alert pops up with the message **Welcome to AI Probably.**

As soon as you close the browser window, change your browser page, or reload your browser page, the `onunload` event is **triggered.**

**Example:**

```
<script>
window.onunload = (event) => {
alert("Thank you for visiting AI Probably");
};
// gets triggered on leaving the page,or reloading it.
</script>
```

Here, when the user **unloads** the window, the `onunload` handler gets triggered and an alert message pops up.

Note: It's a good practice to use `addEventListener()` to register the unload event.

The `onmouseover` event occurs when the user moves the mouse over an HTML element, whereas the `onmouseout` event is triggered when the user moves the mouse away from the HTML element.

**NOTE:** Both these events are generally used **together**.

**Example:**

```
<div onmouseover="moveOver(this)" onmouseout="moveOut(this)"
  style="background-color:skyblue;width:160px;height:20px;padding:40px;">
Mouse Over Me</div> <!-- events are registered on a styled box -->
<script>
function moveOver(obj) { // when the mouse moves over the HTML element
  obj.innerHTML = "Welcome to AI Probably"
}
function moveOut(obj) { // when the mouse moves away from the HTML element
  obj.innerHTML = "Thanks for visiting"
}
</script>
```

Here, initially, before any mouse activity, we see a **box** with the text **Mouse Over Me.** As soon as the user moves the mouse over the box, the `onmouseover` event is triggered, and control is given to the `moveOver()` function, displaying the new text **Welcome to AI Probably.**

When the user moves the cursor away from the box, the `onmouseout` event is triggered, and control is given to the `moveOut()` function, displaying the new text **Thanks for visiting.**

Note: The **Mouse Over Me** text is only displayed the first time, after which control keeps switching between the `moveOver` and `moveOut` functions.

The `onmousedown` and `onmouseup` events occur when the **user clicks** on an HTML element. First, on **clicking** the HTML element, the `onmousedown` event is triggered. After that, when the user **releases** the mouse over the HTML element, the `onmouseup` event is triggered.

**Example:**

```
<div onmousedown="mouseDown(this)" onmouseup="mouseUp(this)"
  style="background-color:green;width:100px;height:20px;padding:40px;">
Click Me</div> <!-- events are registered on a styled box -->
<script>
function mouseDown(obj) { // occurs when the user presses the mouse button on the box
  obj.style.backgroundColor = "skyblue";
  obj.innerHTML = "Welcome to AI Probably";
}
function mouseUp(obj) { // occurs when the user releases the mouse button
  obj.style.backgroundColor = "yellow";
  obj.innerHTML = "AI Probably";
}
</script>
```

Here, notice that when the page is loaded, you see the text **Click Me** with a green background. As soon as the user **clicks** on the box, the `mouseDown()` function is triggered, and when the user **releases** the mouse, the `mouseUp()` function is triggered.

Note: Similar to the `onmouseover` and `onmouseout` events, both these events, `onmousedown` and `onmouseup`, are generally used **together**.

The `onfocus` and `onblur` events are generally used to **interact with forms.** The `onfocus` event occurs when the user **clicks** on a form field, while the `onblur` event occurs when you click **outside** of the field.

**Example:**

```
<style>
.incorrect { border-color: yellow; }
#errorMsg { color: red }
</style>
Email Id: <input type="email" id="input"> // a form input email field
<div id="errorMsg"></div>
<script>
input.onblur = function() { // when you click outside the field
if (!input.value.includes('@')) {
input.classList.add('incorrect');
errorMsg.innerHTML = 'Please enter a valid email.'
}
};
input.onfocus = function() { // when you click inside the field
if (this.classList.contains('incorrect')) {
this.classList.remove('incorrect');
errorMsg.innerHTML = "";
}
};
</script>
```

Here, once the user **clicks** inside the **email id box**, the `onfocus` event is triggered and focus is given to that field. As soon as the user clicks **outside** the field, the `onblur` event is triggered: the border of the box **changes color** to yellow, and a message requesting a valid email is displayed. The message disappears as soon as the user clicks back inside the field and focus shifts to the field again.

Note: The `onfocus` and `onblur` events are considered the opposite of each other.

The above-mentioned events are the most common types of events that a developer uses to maintain and manipulate their web pages. ๐จ๐ป

๐ Now let's discuss **Event Listeners**. Each possible event has an **event handler**, which is a piece of code (often a JavaScript function created by you as a programmer) that executes when the **event occurs**.

๐ We say we've registered an event handler when such a block of code is defined to run in **response** to an **event**, and these event handlers are also called **event listeners.**

The `EventListener` interface denotes an object that can respond to an event dispatched by an `EventTarget` object. `EventListener` accepts both a function and an object with a `handleEvent()` property function.

`EventTarget` is a DOM interface implemented by objects that can receive events and may have listeners for them. The most common event targets are Element, Document, and Window.

**Example:**

```
EventListener.handleEvent()
```

๐ก Here, `handleEvent()` is the function that gets triggered whenever the specified event occurs.

๐ Now let's take a look at some `EventListener` methods that enable the user to either **register** an **event handler** of a specific event type on the `EventTarget` or remove an event listener from the `EventTarget`, using the `addEventListener()` and `removeEventListener()` methods, respectively.

๐ The `addEventListener()` method attaches an event handler to the specified element without overwriting any existing event handlers.

**Syntax:**

```
EventTarget.addEventListener(event, function, useCapture);
```

- ๐ก `event` - The type of event, i.e., click, mouseover, etc.
- ๐ก `function` - The function we want to call when the event occurs.
- ๐ก `useCapture` - A boolean value specifying whether to use event capturing or event bubbling.

Note: `useCapture` is an optional parameter.

**Example:**

```
<button id="Btn">Click Me</button>
<script>
// registering a click EventListener on the button
document.getElementById("Btn").addEventListener("click", greeting);
function greeting() { // triggered on clicking the button
  alert("Welcome to AI Probably");
}
</script>
```

๐ก Here we add an `EventListener` for the **click** event. As soon as we click on the button, the click event occurs and the `greeting()` function gets triggered, displaying an **alert popup.**

**Example: Adding multiple event handlers to the same element**

๐ Now we add many events to the same element **without overwriting** the existing events.

```
<button id="Btn">Click Me</button>
<script>
var welCome = document.getElementById("Btn");
welCome.addEventListener("click", greetings); // event handler one
welCome.addEventListener("click", Appreciation); // event handler two
function greetings() { // triggered first on clicking the button
  alert("Welcome to AI Probably");
}
function Appreciation() { // triggered second on clicking the button
  alert("Welcome to the future!");
}
</script>
```

๐ก Here, on **clicking** the button, first the `greetings()` function is triggered, displaying an **alert message**, after which the `Appreciation()` function is triggered and a **second alert** message pops up.

๐ We use the `removeEventListener()` method to **delete event handlers** previously registered using the `addEventListener()` method.

**Example:**

```
<style>
#add { /* styling around the button */
  background-color: skyblue;
  border: 1px solid;
  width: 65px;
  padding: 20px;
  color: white;
  font-size: 20px;
}
</style>
<div id="add">
<button onclick="removeHandler()" id="Btn">Remove</button>
<!-- invoking the removeHandler function on clicking the button -->
</div>
<p id="remove"></p>
<script>
document.getElementById("add").addEventListener("mousemove", randNumber); // adding the mousemove event listener
function randNumber() { // triggered on moving the mouse around the colored button area
  document.getElementById("remove").innerHTML = Math.random();
}
function removeHandler() {
  document.getElementById("add").removeEventListener("mousemove", randNumber); // removes the previously added mousemove event listener
}
</script>
```

๐ก Here, we first add a `mousemove` event handler that displays a **random number** using the `randNumber()` function every time we move over the **Remove button** or the surrounding **sky-blue-colored** area.

๐ก To remove this event handler, the user has to click on the button, which triggers the `removeHandler()` function.

With this, we are done with the concepts of **Events** and **EventListeners**, but to truly master them and get the hang of these concepts, you should practice regularly ๐ง with your own custom code.

๐กRemember that events are like your **actions** and event listeners are like the **consequences** of your actions. ๐

I hope you enjoyed this article and found it helpful!

Do check out our article on Introduction to ReactJS

For more such cool blogs and projects, check out our YouTube channel.

Today in this blog, we will discuss two of the most commonly used path-finding algorithms to solve **Graph Traversal Problems**:

- ๐ **Dijkstra's Algorithm** to find the **shortest path between any two graph nodes**
- ๐ **A* Search Algorithm** to find a **path in a game-like scenario**

๐ก Remember that both of these approaches might fit a given problem; it's up to you to choose the approach you are most confident about. Knowing both methods comes in handy, especially when a given problem has time constraints - one algorithm meets the time constraint while the other doesn't. So, let's get started!

**In this blog, I will try to cover the following topics:**

|-- Dijkstra's Algorithm

| -- Basic Concepts

| -- Algorithmic Steps

| -- Implementation in Python

| -- A* Search Algorithm

| -- Basic Concepts

| -- Heuristics

| -- Algorithmic Steps

| -- Implementation in Python

| -- Comparison between these two algorithms

Before we can really dive into **Dijkstra's Famous Algorithm**, we need to gather a few seeds of vital information that we'll need along the way. ๐ It's finally time for us to meet the **Weighted Graph!** A **weighted graph** is intriguing since it is unaffected by whether the graph is `directed` or `undirected`, or whether it contains cycles.

At its most basic level, a weighted graph is a graph with edges that have some form of value associated with them. The weight of an edge is determined by the value attributed to it. The cost or distance between two nodes is a common way of referring to the "weight" of a single edge.

Now, the **weight** of a graph begins to **complicate** things slightly here;๐ง finding the **shortest path** between two nodes becomes considerably more complicated when we have to account for the weights of the edges we are traversing.

Do you believe that finding the shortest path between nodes using the **Brute-Force Method** would be viable, scalable, and efficient when dealing with **large graphs**, say, 20 nodes? It isn't practical, nor is it really any fun! ๐ And that's where **Dijkstra** comes to the rescue. ๐ช

**Dijkstra's Algorithm** is **special** for various reasons, which we'll discover as we learn more about how it works.๐ But the fact that this approach isn't merely utilized to identify the shortest path between two specified nodes in a graph data structure has always caught me off guard.๐

Dijkstra's Algorithm can be used to find the shortest path between any two nodes in a network, as long as the nodes are reachable from the beginning node.

๐ This algorithm will **continue** to run until all of the **reachable vertices** in a graph have been visited. This implies we could use Dijkstra's Algorithm to **identify** the shortest path between any two reachable nodes and save the results somewhere. ๐ We can run Dijkstra's Algorithm **just once** and look up our results again and again, without having to actually run the algorithm itself! ๐

The only time we'd need to **re-run** Dijkstra's Algorithm is if something about our graph data structure **changed**, in which case we'd need to do so to **guarantee** that we still have the most **up-to-date** shortest paths for our data structure.

**Path-finding issues**, such as detecting directions or finding a route on **Google Maps**, are the most common **application** of Dijkstra's Algorithm.๐ฉ To identify a path through Google Maps, however, implementation of Dijkstra's Algorithm must be considerably **more clever**, taking into account a weighted graph and **traffic**, **road conditions**, **road closures**, and **construction**.๐ฏ

Alright! Do not worry if all of this seems overwhelming!๐ The following section will undoubtedly ease your mind.๐

Let's consider an **Undirected, Weighted Graph** shown below:

๐ Let's assume we're looking for the shortest route between nodes `A` and `C`.

๐ We know we'll **start** at node `A`, but we have no idea if there is a path to get there or if there are several paths! In any scenario, we have no idea which path will be the shortest to get to node `C`, assuming one exists at all.

Before we go any further, let's go over the process for using Dijkstra's Algorithm.

- ๐ Every time we set out to visit a new node, we will **prioritize** visiting the node with the **shortest** known distance/cost.
- ๐ Once we've arrived at the node we're going to visit, we'll inspect each of its **neighboring** nodes.
- ๐ For each **neighboring** node, we'll determine its distance/cost by **adding** the costs of the edges that lead to it from the starting vertex.
- ๐ Finally, if the calculated distance/cost to a node is **less** than a known distance, we will **update** the shortest distance we have on file for that vertex.
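The four steps above can be sketched directly in Python. The version below is purely my own minimal illustration (not the implementation shown later in this post), using a priority queue from `heapq` to implement the "visit the node with the shortest known distance next" rule:

```python
import heapq

def dijkstra(graph, start):
    """Return the shortest distance from start to every reachable node.

    graph: dict mapping node -> dict of {neighbour: edge weight}.
    """
    distances = {start: 0}
    queue = [(0, start)]  # (known cost, node); heapq always pops the cheapest
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > distances.get(node, float('inf')):
            continue  # stale queue entry; a shorter path was already found
        for neighbour, weight in graph[node].items():
            new_cost = cost + weight
            if new_cost < distances.get(neighbour, float('inf')):
                distances[neighbour] = new_cost  # update the shortest distance on file
                heapq.heappush(queue, (new_cost, neighbour))
    return distances

# The same weighted graph used in this post's example:
graph = {
    'A': {'B': 7, 'E': 2},
    'B': {'A': 7, 'C': 6, 'D': 3, 'E': 3},
    'C': {'B': 6, 'D': 6},
    'D': {'B': 3, 'C': 6, 'E': 2},
    'E': {'A': 2, 'B': 3, 'D': 2},
}
print(dijkstra(graph, 'A'))  # {'A': 0, 'B': 5, 'E': 2, 'D': 4, 'C': 10}
```

Popping from the heap finds the cheapest unvisited node in O(log V) time instead of scanning every unvisited node on each iteration.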

๐ Now, let's get started with our example!

Tracing our paths now, the exact path that gives us the shortest distance between nodes `A` and `C` is: `A -- E -- D -- C`.

**Any shortest path** in this table can be found by **retracing** our steps and returning to the initial node by following the **previous vertex** of any node. ๐

Let's suppose we decide that we want to locate the shortest path from node `A` to node `D`. We don't need to run Dijkstra's Algorithm again because we already have all we need right here!

We'll **start** with node `D`, then look at node `D`'s **previous** vertex, which happens to be node `E`. Similarly, we'll look at node `E`'s previous vertex, which is node `A`, our starting vertex! Once we trace our steps all the way back up to our starting vertex, ๐ we get the results in this order: `A -- E -- D`. As it turns out, this is the exact path that gives us the lowest cost/distance from node `A` to node `D`! ๐
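This retracing procedure can be written as a short helper. The sketch below is my own illustration, assuming we stored each node's previous vertex in a dictionary while the algorithm ran (the values shown are the ones worked out in the example above):

```python
def retrace(previous, start, target):
    """Walk the 'previous vertex' map backwards from target to start."""
    path = [target]
    while path[-1] != start:
        path.append(previous[path[-1]])  # step back to the previous vertex
    path.reverse()  # we collected nodes target-first, so flip the order
    return path

# Previous-vertex table for the example graph, with 'A' as the starting node:
previous = {'B': 'E', 'C': 'D', 'D': 'E', 'E': 'A'}
print(retrace(previous, 'A', 'D'))  # ['A', 'E', 'D']
print(retrace(previous, 'A', 'C'))  # ['A', 'E', 'D', 'C']
```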

```
nodes = ('A', 'B', 'C', 'D', 'E')
distances = {
    'B': {'A': 7, 'C': 6, 'D': 3, 'E': 3},
    'A': {'B': 7, 'E': 2},
    'C': {'B': 6, 'D': 6},
    'D': {'B': 3, 'C': 6, 'E': 2},
    'E': {'A': 2, 'B': 3, 'D': 2}}
unvisited = {node: None for node in nodes}  # using None as +inf
visited = {}
current = 'A'
currentDistance = 0
unvisited[current] = currentDistance

while True:
    for neighbour, distance in distances[current].items():
        if neighbour not in unvisited:
            continue
        newDistance = currentDistance + distance
        if unvisited[neighbour] is None or unvisited[neighbour] > newDistance:
            unvisited[neighbour] = newDistance
    visited[current] = currentDistance
    del unvisited[current]
    if not unvisited:
        break
    candidates = [node for node in unvisited.items() if node[1] is not None]
    current, currentDistance = sorted(candidates, key=lambda x: x[1])[0]

print(visited)
```

Output:

`{'A': 0, 'E': 2, 'D': 4, 'B': 5, 'C': 10}`

The **most important advantage** of the **A* Search Algorithm**, which separates it from other traversal techniques, is that it has a brain!!๐ง This makes A* very smart and pushes it much ahead of other conventional algorithms.๐

Actually, the **A* Search Algorithm** is built on the principles of **Dijkstra's Shortest Path Algorithm** itself but provides a **faster solution** ๐ when faced with finding the shortest path between two nodes.

It accomplishes this by incorporating a heuristic element that helps select the next node to examine as the path progresses.

Remember reading earlier about the **A* Algorithm having a brain**? ๐ง This is it! The A* Algorithm employs a heuristic function to assess and analyze which path to take next. ๐ญ The heuristic function estimates the minimum cost between a given node and the target node.

The algorithm combines the actual cost from the start node - `g(n)` - with the anticipated cost to the destination node - `h(n)` - and uses the result, `f(n) = g(n) + h(n)`, to decide which node to examine next.

(Told you, it's a simple addition expression!) ๐
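As a tiny worked example, here is what that addition looks like for the very first decision from node `A`, using the heuristic values given in the implementation further below (the `f` helper itself is just my own illustration):

```python
# Heuristic estimates h(n), as given in this post's implementation:
h = {'A': 0, 'B': 7, 'C': 2, 'D': 5, 'E': 2}

def f(g_cost, node):
    """f(n) = g(n), the actual cost so far, plus h(n), the estimated cost to the goal."""
    return g_cost + h[node]

# From 'A' we can reach 'B' (edge cost 7) or 'E' (edge cost 2):
print(f(7, 'B'))  # 14
print(f(2, 'E'))  # 4 -> the lower f-score, so A* examines 'E' next
```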

To find the best solution, you might have to use different heuristic functions depending on the type of problem. ๐ฒ **A heuristic function, in essence, helps the algorithm make the right decision faster and more efficiently. ๐ A heuristic function's core qualities ๐ฐ are Admissibility (it never overestimates the true cost to the goal) and Consistency (its estimate never drops by more than the cost of the edge traversed).**
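These two qualities can be checked mechanically on a small graph. The helpers and the tiny three-node line graph below are entirely my own illustration (not from this post's implementation): consistency requires `h(n) <= w(n, n') + h(n')` for every edge, and admissibility requires that `h` never overestimates the true cost to the goal.

```python
def is_consistent(graph, h):
    """h is consistent if h(n) <= w(n, n') + h(n') for every edge (n, n')."""
    return all(h[n] <= w + h[m]
               for n, nbrs in graph.items()
               for m, w in nbrs.items())

def is_admissible(h, true_cost):
    """h is admissible if it never overestimates the true cost to the goal."""
    return all(h[n] <= true_cost[n] for n in h)

# A tiny made-up line graph X -- Y -- Z (goal Z), purely for illustration:
line = {'X': {'Y': 2}, 'Y': {'X': 2, 'Z': 3}, 'Z': {'Y': 3}}
h_line = {'X': 4, 'Y': 3, 'Z': 0}     # straight-line-style estimates
true_cost = {'X': 5, 'Y': 3, 'Z': 0}  # actual shortest costs to Z
print(is_consistent(line, h_line))   # True
print(is_admissible(h_line, true_cost))  # True
```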

However, **creating these functions** is a difficult task, and this is the **fundamental problem** we face in A* Algorithms.๐ฉ

But again, don't worry if all of this seems overwhelming too! ๐ Let's move on to the following section. It will undoubtedly ease your mind. ๐

๐ Now that you know more about this algorithm, let's see how it works behind the scenes with a **step-by-step** example. We will be using the same **Undirected, Weighted Graph** that we used for Dijkstra's Algorithm.

Tracing our paths now, the exact path that gives us the shortest distance between nodes `A` and `C` is: `A -- E -- D -- C`.

```
def aStarAlgo(start_node, stop_node):
    open_set = set(start_node)
    closed_set = set()
    g = {}  # stores the distance from the starting node
    parents = {}  # contains an adjacency map of all nodes
    g[start_node] = 0
    parents[start_node] = start_node
    while len(open_set) > 0:
        n = None
        for v in open_set:  # find the node with the lowest f() = g() + h()
            if n is None or g[v] + heuristic(v) < g[n] + heuristic(n):
                n = v
        if n == stop_node or Graph_nodes[n] is None:
            pass
        else:
            for (m, weight) in get_neighbors(n):
                if m not in open_set and m not in closed_set:
                    open_set.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight
                else:
                    if g[m] > g[n] + weight:
                        g[m] = g[n] + weight
                        parents[m] = n
                        if m in closed_set:
                            closed_set.remove(m)
                            open_set.add(m)
        if n is None:
            print('Path does not exist!')
            return None
        if n == stop_node:
            path = []
            while parents[n] != n:
                path.append(n)
                n = parents[n]
            path.append(start_node)
            path.reverse()
            print('Path found: {}'.format(path))
            return path
        open_set.remove(n)
        closed_set.add(n)
    print('Path does not exist!')
    return None

# function to return the neighbors of a node along with their distances
def get_neighbors(v):
    if v in Graph_nodes:
        return Graph_nodes[v]
    else:
        return None

# for simplicity, we consider the heuristic distances given;
# this function returns the heuristic distance for each node
def heuristic(n):
    H_dist = {
        'A': 0,
        'B': 7,
        'C': 2,
        'D': 5,
        'E': 2,
    }
    return H_dist[n]

# describe your graph here
Graph_nodes = {
    'B': [('A', 7), ('C', 6), ('D', 3), ('E', 3)],
    'A': [('B', 7), ('E', 2)],
    'C': [('B', 6), ('D', 6)],
    'D': [('B', 3), ('C', 6), ('E', 2)],
    'E': [('A', 2), ('B', 3), ('D', 2)]}
aStarAlgo('A', 'C')
```

Output:

`Path found: ['A', 'E', 'D', 'C']`

Coming towards the end, let us now get to **compare** both of these super-powerful algorithms that we just studied!๐

The first basis of comparison is the **Time Complexities**.

| | Dijkstra's Algorithm | A* Search Algorithm | Description |
| --- | --- | --- | --- |
| Worst-case Time Complexity | O(E log V) | O(V) | V is the number of vertices; E is the total number of edges |

Note: The time complexity of the A* Algorithm depends heavily on the heuristic.

**First, let's talk about Dijkstra's Algorithm.**

Dijkstra's key advantage is that it is an **uninformed algorithm**. ๐ช This means it doesn't need to know the destination node ahead of time.

You can utilize this functionality when you don't know anything about the graph and can't estimate the distance between each node and the goal.

When you have several target nodes, it comes in handy. Dijkstra often covers a vast portion of the graph since it selects the edges with the lowest cost at each step. This is particularly useful when you have numerous target nodes and aren't sure which one is the closest.

**Now comes the A* algorithm.**

If you ask what the **best 2-D path-finding algorithm** is, the answer is the **A* Algorithm**! ๐ Its intelligence is what separates it from other conventional algorithms.

People choose A* over Dijkstra's Algorithm because Dijkstra's Algorithm fails on negative edge weights. ๐ Additionally, since Dijkstra's does a blind search, it wastes a lot of time while processing! ๐

๐ But, as every coin has two sides, people prefer Dijkstra's Algorithm in some circumstances, since A* cannot easily be applied to graphs with many target nodes: it has to be executed multiple times (once for each target node) to reach all of them. ๐

**Another main drawback of the A* Algorithm is the memory requirement**.

It keeps all generated nodes in memory, so it is not practical for many large-scale problems.

๐บ Another great challenge in selecting A* is the need for a good heuristic function. The time it takes to compute the heuristic must not cancel out the time savings in the path-finding process. ๐

That's it! Now you know how these algorithms work behind the scenes and which one you would choose depending on the application. ๐

Happy reading!

I hope you enjoyed this article and found it helpful! ๐ฏ For more such cool blogs and projects, check out our YouTube channel.๐
