DataSpoof @dataspoof Channel on Telegram

DataSpoof

@dataspoof


Learn Data Science

https://dataspoof4081.graphy.com/membership

Artificial Intelligence
Machine Learning
Data Science
Deep learning
Computer vision
NLP
Big data

DataSpoof (English)

Are you interested in the fascinating world of Artificial Intelligence, Machine Learning, Data Science, and more? Look no further than DataSpoof! Our Telegram channel is dedicated to all things related to AI, ML, deep learning, computer vision, NLP, and big data. Whether you are a seasoned professional in the field or just starting out, DataSpoof provides valuable insights, resources, and updates to keep you informed and inspired. Stay connected with us on Instagram, LinkedIn, and Twitter to join our growing community of tech enthusiasts. Follow the latest trends, discover new technologies, and engage with like-minded individuals who share your passion for cutting-edge innovations. Don't miss out on this opportunity to expand your knowledge and network with industry experts. Join DataSpoof today and take your understanding of AI and data technologies to the next level!

DataSpoof

17 Feb, 08:26


Data analyst Training

DataSpoof

17 Feb, 08:26


Training Details_data_science.docx

DataSpoof

17 Feb, 08:21


GenAI Curriculum (DataSpoof).pdf

DataSpoof

14 Feb, 14:08


DM us on WhatsApp for real-time training:
+9183182 38637

These are the training programs we offer:

1- Data Science Training (5 months)
2- GenAI Training (40 days)
3- MLOps Training (40 days)
4- Data Analyst Training (45 days)
5- Big Data Training (60 days)

DataSpoof

13 Feb, 14:57


How to perform inferential statistics in Python

Do watch it, like it, and subscribe to our YouTube channel.

Support our content by subscribing; we will upload more free content on data science.

https://youtu.be/G-lgNshSmr0?si=P3SSG34nZMHZHOhA
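If you want to try one of the techniques from the video yourself, here is a minimal, self-contained sketch of a two-sample t-test with SciPy. The data below is synthetic, generated purely for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=50, scale=5, size=100)  # e.g. control group scores
group_b = rng.normal(loc=53, scale=5, size=100)  # e.g. treatment group scores

# Two-sample t-test: is the difference in group means statistically significant?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the group means differ.")
```

The same `scipy.stats` module also covers paired t-tests (`ttest_rel`), ANOVA (`f_oneway`), and chi-square tests, which are the usual next steps in inferential statistics.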

DataSpoof

02 Feb, 13:41


How to perform statistical data analysis in Python.

Do watch it, like it, and subscribe to our channel.

Support our content by subscribing; we will upload more free content on data science.



https://youtu.be/VJF6qHAl6VQ?si=VTEQvjrDR_Qp4IUy

DataSpoof

29 Jan, 13:00


Complete Exploratory data analysis in python.

Do watch it, like it, and subscribe to our channel.

Support our content by subscribing; we will upload more free content on data science.

https://youtu.be/CVIBd5x_O9k?si=L6JCi_KaEn-k664c
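As a taste of what the video walks through, a first EDA pass in pandas might look like this. The toy DataFrame is made up for illustration:

```python
import pandas as pd

# Toy dataset, purely illustrative
df = pd.DataFrame({
    "age": [25, 32, 47, 51, 38, 29],
    "income": [40000, 52000, 88000, 91000, 61000, 45000],
    "city": ["Delhi", "Pune", "Delhi", "Mumbai", "Pune", "Delhi"],
})

print(df.shape)                        # rows x columns
print(df.dtypes)                       # data type of each column
print(df.describe())                   # summary statistics for numeric columns
print(df["city"].value_counts())       # frequency of each category
print(df[["age", "income"]].corr())    # correlation between numeric columns
```

From here, the usual next steps are histograms and box plots per numeric column and bar charts per categorical column.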

DataSpoof

28 Jan, 03:46


https://www.instagram.com/reel/DFWqxPsShSn/?igsh=bm1tZzE0dGs1aHpt

Learn how DeepSeek-V3 caused the stock market to crash

DataSpoof

25 Jan, 13:37


Complete Data Preprocessing video is available on our YouTube channel.

It covers two things:
1- Checking the quality of the data
2- Doing data cleaning

Steps for checking the quality of data:

1- Check the data manually
2- Check for incorrect data types
3- Check for spelling errors in the column names
4- Check for spelling errors in the categorical column values
5- Check for negative values in the numerical columns
6- Check for missing values
7- Check for duplicate values
8- Check for outliers in the numerical columns
9- Check for data imbalance in the target column
10- Check for skewness in the numerical columns
11- Check for multicollinearity
12- Check for cardinality in the categorical columns
13- Encode the categorical columns

Do watch it, like it, and subscribe to our YouTube channel.
We are aiming for 100 likes on this video. Show your support so that we can keep uploading free content.

https://youtu.be/futAzAg99uA?si=NFx1BmSf-6V7xMtr
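Several of the quality checks listed above can be sketched in a few lines of pandas. The toy DataFrame, column names, and deliberate defects below are illustrative only:

```python
import pandas as pd

# Toy data seeded with quality issues on purpose
df = pd.DataFrame({
    "price": [10.0, -5.0, 12.5, None, 12.5],            # negative + missing value
    "category": ["book", "Book", "toy", "toy", "toy"],  # inconsistent casing
})

# Steps 2 and 6: data types and missing values
print(df.dtypes)
print(df.isnull().sum())

# Step 5: negative values in a numerical column
print((df["price"] < 0).sum(), "negative prices")

# Step 7: duplicate rows
print(df.duplicated().sum(), "duplicate rows")

# Step 4: inconsistent categorical values (case differences)
print(df["category"].str.lower().nunique(), "categories after lowercasing,",
      df["category"].nunique(), "before")

# Step 10: skewness of a numerical column
print("skewness:", df["price"].skew())
```

Each check prints a count rather than fixing anything; the cleaning decisions (drop, impute, normalize casing) come afterwards.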

DataSpoof

24 Jan, 10:58


⭐️Want an open-source version of OpenAI's Operator?

There's a great open-source project called Browser Use that does similar things (and more).

It allows you to plug in any model you want.

Love to see open source leading the way🚀


https://www.instagram.com/p/DFNKm_JSQUQ/?igsh=eXlodmVwbXdyaTUy

DataSpoof

24 Jan, 08:19


🔥 BREAKING: OpenAI Launches Operator: The Future of AI Automation

OpenAI has introduced Operator, an AI agent that can complete tasks on its own using a web browser. It’s designed to make work easier by handling tasks for you.

Operator is powered by the new Computer-Using Agent (CUA) model. It combines GPT-4o's vision with advanced reasoning, allowing it to see, click, type, and interact with websites just like a person. No special integrations are needed.

DataSpoof

16 Jan, 13:47


How to build a real-time stock market data processing pipeline using AWS Lambda and Kinesis

Complete video is available on YouTube. Like and subscribe to our YouTube channel for such content.

https://youtu.be/CNHvbGNGV1A?si=vecZlS3Fkbk5C4zp

DataSpoof

15 Jan, 12:50


List of 500+ AI Agent projects/use cases

https://github.com/DataSpoof/500-AI-Agents-Projects

DataSpoof

15 Jan, 12:40


AI Agents are about to change everything—and it’s happening now.

Here’s the cheat sheet:
1️⃣ Agentic RAG Routers: Think of them as traffic controllers for your workflows.
2️⃣ Query Planning RAG: Perfect for making tasks super efficient.
3️⃣ Adaptive RAG: Always learning, always improving.
4️⃣ Corrective RAG: Spotting and fixing errors before they derail you.
5️⃣ Self-Reflective RAG: Basically, AI journaling to improve itself.
6️⃣ Speculative RAG: Solving problems before you even know they exist.
7️⃣ Self Route RAG: Dynamic workflow magic.

DataSpoof

10 Jan, 05:30


Everyone knows about LLMs, aka Large Language Models.

Now we will talk about SLMs, aka Small Language Models.

As their name implies, SLMs are smaller in scale and scope than large language models.

Some examples of SLMs are:
- Phi-3.5
- TinyLlama
- MobileLLaMA
- Gemma 2

SLMs can be trained using two main techniques:

Knowledge distillation: A smaller model learns from a larger, already-trained model

Pruning: Extra bits that aren't needed are removed to make the model faster and leaner

Here are some characteristics of SLMs:

Smaller in size: SLMs have fewer parameters than LLMs, often in the tens to hundreds of millions, compared to billions in LLMs.

More efficient: SLMs are more computationally efficient and can run on less powerful hardware.

Faster training: SLMs can be trained and developed faster than LLMs.

Specialized: SLMs are trained on curated data sources and can be specialized in specific tasks.

Fine-tunable: SLMs can be fine-tuned to do exactly what is needed for a specific task.

Cost-effective: SLMs can be more cost-effective than LLMs, making them a good option for integrating intelligent features when resources are limited.
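As a rough, numpy-only illustration of the knowledge-distillation idea mentioned above (not a full training loop), the student is penalized for diverging from the teacher's temperature-softened output distribution. All the logits below are hypothetical:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return float(np.sum(p_teacher * np.log(p_teacher / p_student)))

teacher = np.array([4.0, 1.0, 0.5])        # logits from the large model
good_student = np.array([3.8, 1.1, 0.4])   # closely mimics the teacher
bad_student = np.array([0.5, 4.0, 1.0])    # disagrees with the teacher

# The closer the student matches the teacher, the smaller the loss
print(distillation_loss(teacher, good_student))
print(distillation_loss(teacher, bad_student))
```

In real distillation this KL term is minimized by gradient descent over the student's weights, usually combined with the ordinary cross-entropy loss on the true labels.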

DataSpoof

09 Jan, 02:48


321 real-world gen AI use cases from the world's leading organizations

https://lnkd.in/guSqrxk5

#genai

DataSpoof

08 Jan, 15:27


You can join our WhatsApp channel

https://whatsapp.com/channel/0029VaI2tnVFMqrThp6yEp1N

DataSpoof

07 Jan, 13:20


https://youtu.be/w7Z9OUkcVQU?si=mW3SsndHK-mSMQsc

DataSpoof

07 Jan, 05:21


𝐃𝐚𝐭𝐚 𝐄𝐧𝐠𝐢𝐧𝐞𝐞𝐫 𝐈𝐈 Interview Experience at PayPal.

I wanted to share my experience interviewing for the 𝐃𝐚𝐭𝐚 𝐄𝐧𝐠𝐢𝐧𝐞𝐞𝐫 𝐈𝐈 position at PayPal.

Here's a breakdown of the process:

𝐎𝐧𝐥𝐢𝐧𝐞 𝐀𝐬𝐬𝐞𝐬𝐬𝐦𝐞𝐧𝐭 (𝐎𝐀):
The first step was an online assessment sent by the recruiter. Clearing this assessment led to two technical rounds being scheduled, separated by a gap of five days.

𝐓𝐞𝐜𝐡𝐧𝐢𝐜𝐚𝐥 𝐑𝐨𝐮𝐧𝐝 𝟏:
This round was with a Data Engineer III and focused on problem-solving and SQL.

𝐀). 𝐃𝐒𝐀 𝐐𝐮𝐞𝐬𝐭𝐢𝐨𝐧𝐬:
1. 𝑇ℎ𝑒 𝑅𝑎𝑖𝑛𝑤𝑎𝑡𝑒𝑟 𝑇𝑟𝑎𝑝 𝑃𝑟𝑜𝑏𝑙𝑒𝑚.
2. 𝐴 𝑃𝑟𝑖𝑜𝑟𝑖𝑡𝑦 𝑄𝑢𝑒𝑢𝑒 𝑃𝑟𝑜𝑏𝑙𝑒𝑚 (I don't recall the exact details, but it was similar to those dealing with task prioritization).

𝐁). 𝐒𝐐𝐋 𝐐𝐮𝐞𝐬𝐭𝐢𝐨𝐧𝐬:
Focused on window functions, their usage, and optimization strategies.

𝐓𝐞𝐜𝐡𝐧𝐢𝐜𝐚𝐥 𝐑𝐨𝐮𝐧𝐝 𝟐 (𝐃𝐞𝐬𝐢𝐠𝐧 𝐑𝐨𝐮𝐧𝐝):
This was done with a Staff Data Engineer and had three main parts:

A). 𝐏𝐫𝐨𝐣𝐞𝐜𝐭 𝐃𝐢𝐬𝐜𝐮𝐬𝐬𝐢𝐨𝐧:
Shared details about my past projects. Also discussed best practices for software and data engineering, including how I implemented these in my projects.

B). 𝐃𝐞𝐬𝐢𝐠𝐧 𝐐𝐮𝐞𝐬𝐭𝐢𝐨𝐧:
The scenario involved multiple data sources such as Hadoop, S3, and Oracle DB. I was tasked with designing a solution to migrate data to a final S3 bucket.
Explained my choices for services and tools, including error logging, scalability, and fault tolerance.

C). 𝐒𝐩𝐚𝐫𝐤 𝐂𝐨𝐝𝐢𝐧𝐠 𝐂𝐡𝐚𝐥𝐥𝐞𝐧𝐠𝐞:
Given two data frames, I had to perform some processing and store the final output in another data frame.

𝐌𝐚𝐧𝐚𝐠𝐞𝐫𝐢𝐚𝐥 𝐑𝐨𝐮𝐧𝐝 (𝐑𝐨𝐮𝐧𝐝 𝟑):
This was with the Senior Engineering Manager, who was also the hiring manager for this role.

𝐓𝐨𝐩𝐢𝐜𝐬 𝐃𝐢𝐬𝐜𝐮𝐬𝐬𝐞𝐝:
A). 𝐏𝐫𝐨𝐣𝐞𝐜𝐭𝐬 : A deep dive into my projects, focusing on why specific tools and services were chosen.
B). 𝐑𝐞𝐚𝐥 𝐋𝐢𝐟𝐞 𝐒𝐜𝐞𝐧𝐚𝐫𝐢𝐨 :
How I would handle pipeline issues, like overload situations or service downtimes.
Behavioral Questions: Highlighted my problem-solving, teamwork, and adaptability skills.

𝐇𝐑 𝐑𝐨𝐮𝐧𝐝 (𝐑𝐨𝐮𝐧𝐝 𝟒):
The final round was with HR. We discussed the offer details PayPal was providing and covered some standard behavioral questions related to company culture and expectations.

Credit: Shubham Shukla

DataSpoof

04 Jan, 09:59


Day 4 is available on our YouTube channel.

Go watch it, like it, and comment if you have any doubts regarding the implementation.

Support us by subscribing; we are aiming for 1,000 subscribers so we can upload machine learning and data science videos as well.

https://youtu.be/l31_x1ghzPU?si=Bx_S-KtSubncPCJJ

DataSpoof

03 Jan, 14:44


Top 10 GitHub Repositories to Ace Your Next Analytics Interview

These repositories offer an extensive range of resources, tutorials, and projects to help you excel in data science and analytics interviews:

1. Machine Learning Interview - 9.1k Stars
Link: https://lnkd.in/g68_2wR7

2. 500+ AI Projects List with Code - 20.2k Stars
Link: https://lnkd.in/g2wwkU6c

3. 100 Days of ML Code - 45.2k Stars
Link: https://lnkd.in/ggu4zHp3

4. Awesome Data Science - 25k Stars
Link: https://lnkd.in/gnvvpZjj

5. Data Science For Beginners - 28.1k Stars
Link: https://lnkd.in/gJacHejc

6. Data Science Masters - 24.9k Stars
Link: https://lnkd.in/gXbY6R6C

7. Awesome Artificial Intelligence - 10.8k Stars
Link: https://lnkd.in/gwjPBXkq

8. Homemade Machine Learning - 23k Stars
Link: https://lnkd.in/giM26Ak2

9. Data Science Interviews - 8.9k Stars
Link: https://lnkd.in/gEPM9TYg

10. Data Science Best Resources - 2.9k Stars
Link: https://lnkd.in/g8Q6ammy

DataSpoof

02 Jan, 14:42


𝗙𝗔𝗔𝗡𝗚 𝗗𝗮𝘁𝗮 𝗦𝗰𝗶𝗲𝗻𝗰𝗲 𝗜𝗻𝘁𝗲𝗿𝘃𝗶𝗲𝘄 𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻:
How does an ARIMA model work?

This is the most common question if you have a forecasting project on your resume, or if the role requires forecasting experience.

To explain this, let's start by breaking down ARIMA, and I mean literally -

AR - Auto-regressive component of the model.
This assumes the future value depends LINEARLY on past values.

Typically, you use an ACF/PACF plot to figure out how many past values to use (the 'p' value of ARIMA).

I - Integrated component of the model.
It represents how to difference the values from themselves to make sure the mean and variance are constant over time. Typically, you use a statistical test like ADF to figure out how much differencing you need (also called the 'd' value in ARIMA).

MA - Moving Average component of the model.
This assumes future values depend LINEARLY on the errors made in forecasts at prior time steps. Typically, you use an ACF/PACF plot to determine how many past errors to use (the 'q' value in ARIMA).

Note: You can also use packages like auto_arima in pmdarima in Python to do a grid search over a range of p, d, q parameters to fit your ARIMA model.

ARIMA essentially works by summing the differenced prior values and forecast errors. The reason this simple formulation is so ubiquitous is its effectiveness and adaptability.

It's able to account for both stationary and non-stationary time series.

It can represent future values in terms of a few of the lagged previous values and forecast errors, making it interpretable and less likely to overfit.

It can accommodate seasonality with its seasonal variant, SARIMA, as well as exogenous variables, i.e. features apart from historical values of the time series itself that might help predict its future values.
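To make the p/d/q discussion concrete, here is a minimal numpy sketch of two of the ARIMA building blocks: first-order differencing (the 'I', d=1) and a least-squares AR(1) fit on the differenced series (the 'AR', p=1). In practice you would reach for statsmodels' ARIMA or pmdarima's auto_arima instead; the synthetic random walk below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic non-stationary series: a random walk with drift
steps = 0.5 + rng.normal(size=300)
y = np.cumsum(steps)

# 'I' component: first-order differencing (d=1) makes the series stationary
dy = np.diff(y)

# 'AR' component: fit AR(1) on the differenced series by least squares
X = dy[:-1]
target = dy[1:]
A = np.column_stack([np.ones_like(X), X])   # intercept + lag-1 term
(intercept, phi), *_ = np.linalg.lstsq(A, target, rcond=None)

# One-step-ahead forecast in the original scale: undo the differencing
next_diff = intercept + phi * dy[-1]
forecast = y[-1] + next_diff
print(f"phi (lag-1 coefficient) = {phi:.3f}, forecast = {forecast:.2f}")
```

Since the differenced series here is white noise around the drift, the fitted phi lands near zero, which is exactly what an ACF/PACF plot would tell you when no AR term is needed.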

Credit- Karun

Follow Abhishek Kumar Singh to learn Python programming, Data Science, and big data.

#datascience #machinelearning #ai #Python #python3 #sql #deeplearning
#computervision #computerscience #programming #bigdata #architecture #datavisualization #dataanalytics #dataanalysis #dataanalyst #machinelearningalgorithms #machinelearningengineer

DataSpoof

01 Jan, 15:04


In the last 2 months, our AWS course has enrolled 2,176 students and received 53+ reviews.

Get the AWS course for 449 today.

https://www.udemy.com/course/aws-certified-solutions-architect-associate-saa-c03-m/?couponCode=EDBA1541FEA21733E639

DataSpoof

30 Dec, 14:50


30 days of Python

Day 3- Python is uploaded on our YouTube channel

Do subscribe and like our Videos for daily Python content

https://youtu.be/ptOH2FBMadE?si=AWnHbq_OGuBMx_Bb

DataSpoof

28 Dec, 13:30


30 days of Python

Day 2- Python is uploaded on our YouTube channel

Do subscribe and like our Videos for daily Python content

https://youtu.be/DFmNCJtQhKU?si=yoqz7_oZDc8FzbSz

DataSpoof

27 Dec, 13:14


30 days of Python

Day 1- Python is uploaded on our YouTube channel

Do subscribe and like our Videos for daily Python content

https://youtu.be/VBk59upcp94?si=AOLD0Uj7H5K3KHHr

DataSpoof

11 Dec, 03:47


Wild! Google just announced that their quantum chip Willow was able to do a computation in 5 minutes that would take current top-tier computers 10,000,000,000,000,000,000,000,000 years to figure out 😳 The 105-qubit chip brings insane error correction, focusing on stability rather than just stacking more qubits. The result? A leap toward practical quantum computing that could revolutionize medicine, AI, and energy in the near future. But here comes the crazy part. As part of the Willow announcement, Google basically confirmed we're living in a multiverse: "It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch." What a time to be alive.


https://www.instagram.com/p/DDbE3U1yeDD/?igsh=MWFjOXc3ZWVqYTNwZw==

DataSpoof

04 Dec, 12:24


Those who have not connected with me on LinkedIn can connect here.

Who am I?

Data Scientist and Corporate Trainer

Trained 5k+ professionals

Worked with 25+ companies

Latest training: big data corporate training with Capgemini, Pune

https://www.linkedin.com/posts/abhishek-kumar-singh-8a6326148_datascience-machinelearning-ai-activity-7270048806618963968-QDXG?utm_source=share&utm_medium=member_android

DataSpoof

27 Nov, 06:09


https://www.dataspoof.info/post/streaming-dashboard-in-powerbi

DataSpoof

22 Nov, 05:20


Hurry up, limited seats!

https://www.udemy.com/course/aws-certified-solutions-architect-associate-saa-c03-m/?couponCode=EDBA1541FEA21733E639

DataSpoof

12 Nov, 01:44


My Amazon SDE Interview experience for the reference of all freshers applying:

(FYI, Amazon just dropped their SDE-1 India University Graduate openings!)


The process:

1️⃣ 1 Online Assessment
2️⃣ 2 Coding rounds
3️⃣ 1 Coding + Leadership Principles round


💻 The interviews:

1️⃣ OA round:
7 basic code-debugging MCQs
2 DSA questions:
- LC 2265. Nodes Equal to Average of Subtree
- LC 68. Text Justification
1 (very lengthy) behavioral question form

- Solved all 7 debugging questions correctly.
- Solved first DSA problem in 10 mins.
- Partially solved second problem, failing few test cases.
- This round went average. But got the interview invite.

Being fast in contests and debugging would help in this round.


2️⃣ Coding round 1:
A BFS-based LeetCode hard problem.
- Quickly coded a BFS + hashmap solution.
- Interviewer had cross-questions but appeared satisfied overall.

If you can solve LC 127. Word Ladder, you’d be fine.


3️⃣ Coding round 2:
> Q1: Nodes at distance K in a binary tree
- Used BFS after creating parent pointers using HashMap.
> Q2: Connect ropes with minimum cost
- Implemented a greedy solution using a priority queue.
- Interviewer liked my speed but gave another problem.
> Q3: Max steps with reduced m
- Gave O(n) solution, then optimized using binary search to O(log n) and later to O(log(sqrt(m))).

Overall, pleasant interview with optimized solutions.

All of the above problems:
- LC 863. All Nodes Distance K in Binary Tree
- GFG. Connect n ropes with minimum cost
- Problem 3 not on the internet. Here’s a playground for it - https://lnkd.in/gsg2Pnmp


4️⃣ Coding + Managerial round:
- LC Hard; Smallest substring in ’s’ containing ’t’ as subsequence
- Came up with a sliding window approach.
- Took 30+ min to explain and code the approach.
- Interviewer was satisfied with my approach, but couldn’t finish coding completely.
- Overall, explained the concept but could have implemented faster.

If you have done LC 76. Minimum Window Substring, you got this one.

Behavioral Questions:
[1] Internship Discussion:
- Day-to-day responsibilities?
- Technologies you worked with, and why?
- Any accomplishments or key learnings?
[2] Amazon Leadership Principles:
- Time when you went above and beyond to meet a customer’s needs? (Customer Obsession)
- Time when you had to make a quick decision with limited information? (Bias for Action)

Decent answers in behavioral round as I had prepped for similar questions.


🎯 Result:
My interview result was positive and a few weeks later, I got the life-altering SDE-1 offer from Amazon

Credit: Harshit Sharma
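The "connect ropes with minimum cost" problem from round 2 above has the classic greedy solution the write-up hints at; a minimal sketch with Python's heapq, always merging the two shortest ropes first:

```python
import heapq

def min_cost_to_connect(ropes):
    """Greedy: repeatedly merge the two shortest ropes via a min-heap."""
    if len(ropes) < 2:
        return 0
    heap = list(ropes)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)  # shortest rope
        b = heapq.heappop(heap)  # second shortest
        total += a + b           # cost of joining them
        heapq.heappush(heap, a + b)
    return total

print(min_cost_to_connect([4, 3, 2, 6]))  # 29
```

The greedy works because a rope joined early gets re-counted in every later merge, so the shortest ropes should be merged first; overall it runs in O(n log n).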

DataSpoof

10 Nov, 14:17


Overfitting happens when a model learns too much detail from training data, including noise, rather than general patterns.

Result: The model performs well on training data but poorly on new, unseen data.

Symptoms: High accuracy on training data, low accuracy on test data.

Cause: Model is too complex (e.g., too many layers, features, or parameters).

Example: Memorizing answers for a specific test rather than understanding concepts.

Solution: Simplify the model, use regularization techniques, or gather more data.

Purpose of Avoiding Overfitting: Ensures the model can generalize and make accurate predictions on new data.
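The symptom described above (high training accuracy, low test accuracy) can be reproduced in a few lines of numpy by fitting polynomials of different degrees to noisy linear data; the degrees, noise level, and seed are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# The true relationship is linear; we add a little noise
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(scale=0.1, size=10)
x_test = np.linspace(0.05, 0.95, 10)
y_test = 2 * x_test + rng.normal(scale=0.1, size=10)

def mse(coeffs, x, y):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

simple = np.polyfit(x_train, y_train, 1)       # matches the true complexity
complex_fit = np.polyfit(x_train, y_train, 9)  # enough parameters to memorize all 10 points

# The degree-9 fit drives training error to ~0 by memorizing the noise,
# but typically does much worse than the linear fit on unseen test points.
print("train MSE  deg1:", mse(simple, x_train, y_train), " deg9:", mse(complex_fit, x_train, y_train))
print("test MSE   deg1:", mse(simple, x_test, y_test), " deg9:", mse(complex_fit, x_test, y_test))
```

The gap between the two train/test pairs is the overfitting signature; regularization or simply choosing the lower degree closes it.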

DataSpoof

02 Nov, 14:58


https://dataspoof4081.graphy.com/membership

DataSpoof

30 Oct, 09:00


https://youtu.be/FEC2qrtBgDg?si=iFUIZS_duM5_yenQ

DataSpoof

25 Oct, 10:58


We are launching a premium membership for our followers.

Currently it contains an AWS course for data scientists.

Soon we will add the following courses:
1- Complete MLOps
2- Complete Data Analyst
3- Complete ML Engineer
4- Complete Big Data Analyst

Support us by becoming a premium member and enjoy the benefits.

https://dataspoof4081.graphy.com/membership

DataSpoof

24 Oct, 01:54


https://youtu.be/mcF-tVSSePU?si=CrQv-lt4i8ZdVfZn

DataSpoof

21 Oct, 13:00


Make sure to subscribe to our YouTube channel

https://yt.openinapp.co/aukk5

DataSpoof

20 Oct, 03:29


Complete roadmap for Azure data engineers

https://www.instagram.com/p/DBVKLOmTb0j/?igsh=MTFzdGpiYTQ5azNwZg==

DataSpoof

18 Oct, 14:44


Capgemini Data Science Training wrap-up

If you are not connected with me on LinkedIn, you can connect there.

https://www.linkedin.com/posts/abhishek-kumar-singh-8a6326148_datascience-machinelearning-ai-activity-7253045178930774016-EVT2?utm_source=share&utm_medium=member_android

DataSpoof

17 Oct, 01:56


For any queries, DM me on WhatsApp:
+9183182 38637

DataSpoof

03 Oct, 03:32


Crash Course on AWS IAM

https://youtu.be/mcYjgf1qozU?si=D5DkIbZ1oNMqbrDH

DataSpoof

21 Sep, 13:13


Smash the subscribe button
https://yt.openinapp.co/csqv6

DataSpoof

19 Sep, 13:08


In this crash course you will learn about:

* Introduction to S3
* How to create a bucket, upload a file, and delete a file from a bucket
* Copying and moving files in S3
* Storage classes in S3
* Lifecycle policies in S3 buckets
* Cross-region replication in S3
* Hosting a static website on S3
* Requester Pays in S3
* Object Lock in S3
* Encryption in AWS S3
* Transfer Acceleration in AWS S3
* Multipart upload in AWS S3
* Various types of commands in CloudShell


https://youtu.be/GOVO_md7D3Y?si=lBi8irAqvJfI2VVy

DataSpoof

17 Sep, 02:02


YouTube

https://yt.openinapp.co/csqv6

DataSpoof

13 Sep, 13:21


How to host a static website on S3

https://youtu.be/QqiKfLQwwDY?si=_RclZPUQld9UqIH4


DataSpoof

08 Sep, 10:46


AWS training review

DataSpoof

02 Sep, 04:03


Data analytics training curriculum:

* Python
* SQL
* NoSQL
* Tableau
* PowerBI
* Excel
* AWS

Aiming for 1,000 subscribers on YouTube.

We will upload the data analytics training for free. It is 60 hours long.

https://yt.openinapp.co/csqv6

DataSpoof

29 Aug, 13:55


https://youtu.be/5s5XmpJtct0?si=wY5t0MnL_LMFM4PI

DataSpoof

24 Aug, 09:20


Accenture Data Scientist Interview Questions!

1st round -

Technical Round

- 2 SQL questions based on playing around with views and tables, which could be solved with both subqueries and window functions.

- 2 Pandas questions, testing your knowledge of filtering, concatenation, joins, and merge.

- 3-4 Machine Learning questions completely based on my projects: explaining the problem statements, then discussing the roadblocks of those projects, plus some cross-questions.

2nd round -

- A couple of Python questions, again on pandas and NumPy with some hypothetical data.

- Machine Learning project explanations and cross-questions.

- A case study and a quiz question.

3rd and final round -

HR interview

Simple scenario-based questions.

Finally,

I was offered a CTC of ××× LPA plus a joining bonus.

Credit - Shubhankit

DataSpoof

23 Aug, 01:03


Do like and subscribe to our YouTube channel

DataSpoof

23 Aug, 01:02


https://youtu.be/S-LujUcL6Bk?si=70qW_jJRHrUY_nmw