Deep Learning, Computer Vision and NLP @awesomedeeplearning Channel on Telegram

Deep Learning, Computer Vision and NLP

@awesomedeeplearning


Deep Learning 💡,
Computer Vision 📽️ &
#Ai 🧠

Get #free_books,
#Online_courses,
#Research_papers,
#Codes, and #Projects,
Tricks and hacks, coding, and training material

Suggestions: @AIindian

Deep Learning, Computer Vision and NLP (English)

Are you passionate about diving deep into the realms of Deep Learning, Computer Vision, and NLP? Look no further than the 'awesomedeeplearning' Telegram channel! This channel is a treasure trove of knowledge and resources for anyone interested in AI and machine learning. From free books to online courses, research papers to coding projects, this channel provides it all. Stay updated with the latest tricks, hacks, and training materials in the field of artificial intelligence. Whether you are a seasoned professional or just starting out, 'awesomedeeplearning' has something for everyone. Join now and embark on an exciting journey of learning and discovery. Suggestion: @AIindian

Deep Learning, Computer Vision and NLP

12 Oct, 05:21


Uber used RAG and AI agents to build its in-house Text-to-SQL system, saving 140,000 hours annually in query writing time. 📈
Here's how they built the system end-to-end:

The system is called QueryGPT and is built on top of multiple agents, each handling one part of the pipeline (a rough sketch of how the agents chain together follows the steps below).

1. First, the Intent Agent interprets the user's intent and identifies the domain workspace relevant to the question (e.g., Mobility, Billing).

2. The Table Agent then selects suitable tables using an LLM, which users can also review and adjust.

3. Next, the Column Prune Agent filters out any unnecessary columns from large tables using RAG. This helps the schema fit within token limits.

4. Finally, QueryGPT uses Few-Shot Prompting with selected SQL samples and schemas to generate the query.
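To make the flow concrete, here is a rough, hypothetical sketch of how such an agent chain could be wired together in Python. The class, the prompts, and the llm() helper are illustrative assumptions, not Uber's actual implementation:

```python
# Hypothetical sketch of a QueryGPT-style multi-agent Text-to-SQL pipeline.
from dataclasses import dataclass

def llm(prompt: str) -> str:
    """Placeholder for a call to whatever LLM backend you use."""
    raise NotImplementedError

@dataclass
class QueryGPT:
    workspaces: dict   # domain name -> candidate tables
    schemas: dict      # table name -> full column schema
    sql_samples: dict  # domain name -> few-shot (question, SQL) examples

    def generate_sql(self, question: str) -> str:
        # 1. Intent Agent: map the question to a business domain/workspace.
        domain = llm(f"Which domain (Mobility, Billing, ...) is relevant to: {question}")

        # 2. Table Agent: choose candidate tables (users can review and adjust).
        tables = llm(f"From {self.workspaces.get(domain)}, pick the tables needed for: {question}")

        # 3. Column Prune Agent: drop irrelevant columns so the schema fits the context window.
        pruned_schema = llm(f"From tables {tables} with schemas {self.schemas}, "
                            f"keep only the columns needed for: {question}")

        # 4. Few-shot generation: curated SQL samples + pruned schema -> final query.
        return llm(f"Examples:\n{self.sql_samples.get(domain)}\n"
                   f"Schema:\n{pruned_schema}\n"
                   f"Write a SQL query answering: {question}")
```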

QueryGPT reduced query authoring time from 10 minutes to 3, saving over 140,000 hours annually!

Link to the full article: https://www.uber.com/en-IN/blog/query-gpt/?uclick_id=6cfc9a34-aa3e-4140-9e8e-34e867b80b2b

Deep Learning, Computer Vision and NLP

06 Oct, 02:50


How Much GPU Memory Is Needed to Serve an LLM?

This is a common question that consistently comes up in interviews or in discussions with your business stakeholders.

And it's not just a random question; it's a key indicator of how well you understand the deployment and scalability of these powerful models in production.

As a data scientist, understanding and estimating the required GPU memory is essential.

LLMs (Large Language Models) range in size from 7 billion parameters to trillions of parameters. One size certainly doesn't fit all.

Let's dive into the math that will help you estimate the GPU memory needed to deploy these models effectively.

๐“๐ก๐ž ๐Ÿ๐จ๐ซ๐ฆ๐ฎ๐ฅ๐š ๐ญ๐จ ๐ž๐ฌ๐ญ๐ข๐ฆ๐š๐ญ๐ž ๐†๐๐” ๐ฆ๐ž๐ฆ๐จ๐ซ๐ฒ ๐ข๐ฌ

General formula, ๐ฆ = ((๐ * ๐ฌ๐ข๐ณ๐ž ๐ฉ๐ž๐ซ ๐ฉ๐š๐ซ๐š๐ฆ๐ž๐ญ๐ž๐ซ)/๐ฆ๐ž๐ฆ๐จ๐ซ๐ฒ ๐๐ž๐ง๐ฌ๐ข๐ญ๐ฒ) * ๐จ๐ฏ๐ž๐ซ๐ก๐ž๐š๐ ๐Ÿ๐š๐œ๐ญ๐จ๐ซ

Where:
- ๐ฆ is the GPU memory in Gigabytes.
- ๐ฉ is the number of parameters in the model.
- ๐ฌ๐ข๐ณ๐ž ๐ฉ๐ž๐ซ ๐ฉ๐š๐ซ๐š๐ฆ๐ž๐ญ๐ž๐ซ typically refers to the bytes needed for each model parameter, which is typically 4 bytes for float32 precision.
- ๐ฆ๐ž๐ฆ๐จ๐ซ๐ฒ ๐๐ž๐ง๐ฌ๐ข๐ญ๐ฒ (q) refer to the number of bits typically processed in parallel, such as 32 bits for a typical GPU memory channel.
- ๐จ๐ฏ๐ž๐ซ๐ก๐ž๐š๐ ๐Ÿ๐š๐œ๐ญ๐จ๐ซ is often applied (e.g., 1.2) to account for additional memory needed beyond just storing parameters, such as activations, temporary tensors, and any memory fragmentation or padding.

๐’๐ข๐ฆ๐ฉ๐ฅ๐ข๐Ÿ๐ข๐ž๐ ๐…๐จ๐ซ๐ฆ๐ฎ๐ฅ๐š:

M = ((P * 4B)/(32/Q)) * 1.2
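As a quick sanity check, here is the same arithmetic as a tiny Python helper (the model sizes below are illustrative, not measured figures):

```python
def estimate_gpu_memory_gb(params_billion: float, q_bits: int, overhead: float = 1.2) -> float:
    """M = ((P * 4 bytes) / (32 / Q)) * overhead, returned in gigabytes."""
    param_bytes = params_billion * 1e9 * 4 / (32 / q_bits)
    return param_bytes * overhead / 1e9

# Illustrative, back-of-the-envelope numbers:
print(estimate_gpu_memory_gb(70, 16))  # ~168 GB  -> a 70B model served in 16-bit
print(estimate_gpu_memory_gb(7, 16))   # ~16.8 GB -> a 7B model in 16-bit
print(estimate_gpu_memory_gb(7, 4))    # ~4.2 GB  -> a 7B model quantized to 4-bit
```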

With this formula in hand, I hope you'll feel more confident when discussing GPU memory requirements with your business stakeholders.

#LLM

Deep Learning, Computer Vision and NLP

30 Sep, 14:13


Best article on getting started with GenAI:
https://blog.bytebytego.com/p/where-to-get-started-with-genai

Deep Learning, Computer Vision and NLP

29 Jul, 12:29


The matrix calculus for Deep Learning. Very well written. https://explained.ai/matrix-calculus/

Deep Learning, Computer Vision and NLP

28 Jun, 05:03


Early results for Gemma 2 on the leaderboard, matching Llama-3-70B.

- Full data at leaderboard.lmsys.org
- Chat with Gemma 2 at chat.lmsys.org
- Gemma 2 blog goo.gle/3RLQXUa

Deep Learning, Computer Vision and NLP

09 May, 13:51


AI / Computer Vision Bootcamp 🚀

Learn AI / Computer Vision from basics to deployment, taught by an IITian and a COEPian.

✅ Build face recognition ☺️
✅ Build AI object detection 🚇✈️
✅ Build a social distancing app
✅ Build an automated invoice reader 📃
✅ Image classification 🥗
✅ Build computer vision applications in healthcare, automotive, retail, manufacturing, security, and surveillance 📸

40+ hours of sessions
12 weeks
13 tools & technologies
7 projects
7 homework assignments
5 case studies
5 skills
5 domains

📌 Remote and weekend sessions.
📌 Starting from the basics.
📌 Get a certificate.

Duration: 3 months

Attend the 1st FREE session on 11th May: https://chat.whatsapp.com/BibIwuuUEWrGEWdZHYluNe

For registrations: https://aiindia.ai/cv-bootcamp/

Deep Learning, Computer Vision and NLP

23 Apr, 16:41


Microsoft just casually shared their new Phi-3 LLMs less than a week after the Llama 3 release. Based on the benchmarks in the technical report (https://arxiv.org/abs/2404.14219), even the smallest Phi-3 model beats Llama 3 8B despite being less than half its size.

Phi-3 was trained on "only" 3.3 trillion tokens, roughly 5x fewer than Llama 3's 15 trillion.

Phi-3-mini has "only" 3.8 billion parameters, less than half the size of Llama 3 8B.

Despite being small enough to be deployed on a phone (according to the report), it matches the performance of the much larger Mixtral 8x7B and GPT-3.5. (Phi-3-mini can be quantized to 4 bits, so it only requires ≈1.8 GB of memory.)
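If you want to try the 4-bit setup yourself, here is a minimal loading sketch with Hugging Face transformers and bitsandbytes. The model id and settings follow common practice and should be treated as indicative rather than an official recipe:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed Hub id for the 4k-context variant

# NF4 4-bit quantization stores weights at roughly half a byte per parameter,
# which is where the ~1.8 GB figure for a 3.8B-parameter model comes from.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    # older transformers versions may additionally need trust_remote_code=True
)

inputs = tokenizer("Why can a heavily filtered dataset beat raw scale?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```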

What is the secret sauce? According to the technical report, it's dataset quality over quantity: "heavily filtered web data and synthetic data".

In addition to the 4k context-window version, there's also a phi-3-mini-128K model that supports up to 128k tokens.

Fun fact: Phi-3 uses the same tokenizer as Llama 2, with a vocabulary size of 32,064.

Deep Learning, Computer Vision and NLP

12 Apr, 05:50


It seems GPT-4 is back on top with its latest update. We'll have to wait and see how it holds up in real-world tests.

Deep Learning, Computer Vision and NLP

11 Jan, 15:53


Good material on Developing AI systems in Medical Imaging: https://aiformedicalimaging.blogspot.com/2023/09/things-to-consider-when-developing-ai.html

Deep Learning, Computer Vision and NLP

01 Jan, 13:57


Retrieval-Augmented Generation for Large Language Models: A survey

This paper is a must read.

It covers everything you need to know about the RAG framework and its limitations. It also lists different state-of-the-art techniques to boost its performance in retrieval, augmentation, and generation.

The ultimate goal behind these techniques is to make this framework ready for scalability and production use, especially for use cases and industries where answer quality matters *a lot*.

These are the key ideas the paper discusses to make your RAG more efficient:

- 🗃️ Enhance the quality of indexed data by removing duplicate/redundant information and adding mechanisms to refresh outdated documents

- 🛠️ Optimize the index structure by determining the right chunk size through quantitative evaluation

- 🏷️ Add metadata (e.g., date, chapter, or subsection) to the indexed documents to enable filtering that improves efficiency and relevance

- ↔️ Align the input query with the documents by indexing chunks of data by the questions they answer

- 🔍 Mixed retrieval: combine different search techniques, such as keyword-based and semantic search

- 🔄 ReRank: sort the retrieved documents to maximize diversity and optimize the similarity with a « template answer »

- 🗜️ Prompt compression: remove irrelevant context

- 💡 HyDE: generate a hypothetical answer to the input question and use it (along with the query) to improve the search (see the sketch after this list)

- ✒️ Query rewriting and expansion to reformulate the user's intent and remove ambiguity
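To illustrate the HyDE idea from the list above, here is a minimal sketch; generate, embed, and vector_store are hypothetical placeholders for whatever LLM client, embedding model, and vector database you use:

```python
# Hypothetical Document Embeddings (HyDE), sketched with placeholder helpers.
def generate(prompt: str) -> str: ...
def embed(text: str) -> list[float]: ...
class vector_store:
    @staticmethod
    def search(vector: list[float], k: int) -> list[str]: ...

def hyde_retrieve(question: str, k: int = 5) -> list[str]:
    # 1. Ask the LLM for a *hypothetical* answer; it may be wrong in the details,
    #    but it is phrased like the documents we want to find.
    hypothetical_answer = generate(f"Write a short passage answering: {question}")

    # 2. Embed the hypothetical answer together with the query and use that
    #    vector for the similarity search.
    query_vector = embed(question + "\n" + hypothetical_answer)

    # 3. Retrieve the top-k real documents to feed into the final generation step.
    return vector_store.search(query_vector, k=k)
```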

Link: https://arxiv.org/abs/2312.10997

Deep Learning, Computer Vision and NLP

21 Dec, 02:13


Think of an LLM that can find entities in a given image, describe the image, and answer questions about it, without hallucinating ✨

Kosmos-2, released by Microsoft, is a very underrated model that can do exactly that. ☃️ Not only that, the Hugging Face transformers integration makes it super easy to use!
Colab link:
https://colab.research.google.com/drive/1t25qM_lOM-HQG6Wg3aRiF4LOuQMN5lUF?usp=sharing
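For reference, a minimal usage sketch with transformers, following the pattern on the microsoft/kosmos-2-patch14-224 model card (treat the prompt format and post-processing call as indicative rather than authoritative):

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "microsoft/kosmos-2-patch14-224"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id)

image = Image.open("your_image.jpg")        # any local image
prompt = "<grounding>An image of"           # <grounding> asks the model to link phrases to boxes

inputs = processor(text=prompt, images=image, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=128)
text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

# Split the caption from the grounded entities (phrase, text span, bounding boxes).
caption, entities = processor.post_process_generation(text)
print(caption)
print(entities)
```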

Deep Learning, Computer Vision and NLP

27 Nov, 05:15


How big do LLMs need to be to reason? 🤔 Microsoft released Orca 2 this week, a 13B Llama-based LLM trained on complex tasks and reasoning. 🧠 Orca's performance comes from its use of synthetically generated data from bigger LLMs. I took a deeper look at the paper and extracted the implementation details and other insights.

Implementation:
1. Constructed a new dataset (Orca 2) with ~817K samples, using prompts from FLAN and GPT-4 to generate reasoning responses with the help of detailed system prompts.
2. Grouped prompts into categories based on similarity to assign tailored system prompts that demonstrate different reasoning techniques.
3. Replaced the original system prompt with a more generic one so the model learns the underlying reasoning strategy ("prompt erasing").
4. Used progressive learning: first finetune Llama on FLAN-v2 (1 epoch), then retrain on 5M ChatGPT samples from Orca 1 (3 epochs), then combine 1M GPT-4 samples from Orca 1 with the 800K new Orca 2 samples for the final training (4 epochs).

๐—œ๐—ป๐˜€๐—ถ๐—ด๐—ต๐˜๐˜€:
๐Ÿ“Š Imitation learning can improve capabilities with enough data.
๐Ÿ”ฌ Reasoning and longer generations to get the correct answer help smaller models to compete with bigger LLMs.
๐Ÿ’ซ Prompt Erasing helped Orca to โ€œlearnโ€ reasoning
๐ŸŽฏ Lowest hallucination rates of comparable models on summarization
โš™๏ธ Used packing for training, concatenating multiple examples into one sequence.
๐Ÿ‘จโ€๐Ÿฆฏ Masked user & system inputs (prompt) and only used generation for loss
๐Ÿ–ฅ Trained on 32 A100 for 80h
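A minimal sketch of the prompt-masking idea above, assuming a generic PyTorch causal-LM setup; the -100 ignore index follows common Hugging Face practice, not necessarily Orca 2's exact code:

```python
import torch

def build_labels(input_ids: torch.Tensor, prompt_len: int) -> torch.Tensor:
    """Mask the prompt tokens so the loss is computed only on the generated answer.

    input_ids: token ids for [system + user prompt + assistant answer]
    prompt_len: number of tokens belonging to the system/user prompt
    """
    labels = input_ids.clone()
    labels[:prompt_len] = -100  # -100 is ignored by PyTorch's CrossEntropyLoss
    return labels

# Example: a 10-token sequence whose first 6 tokens are the prompt.
ids = torch.arange(10)
print(build_labels(ids, prompt_len=6))  # first 6 labels are -100, the rest are supervised
```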

Paper: https://huggingface.co/papers/2311.11045
Model: https://huggingface.co/microsoft/Orca-2-13b

Deep Learning, Computer Vision and NLP

17 Nov, 11:33


It's not only about LLMs...

🌟 Microsoft Introduces Florence-2, a Breakthrough in Computer Vision!

👉 Microsoft has just unveiled Florence-2, a revolutionary foundation model designed for various computer vision and vision-language tasks. This new model simplifies the process by using one backbone for multiple tasks. Read more about it in the paper and project details provided below 👇

Key Highlights:

✅ Achieves state-of-the-art performance on a wide range of tasks
✅ Employs a unified, prompt-based representation for vision tasks (see the usage sketch below)
✅ Features the FLD-5B dataset, with over 5 billion annotations across 126 million images
✅ Handles detection, captioning, and grounding, all with a single model
✅ Streamlined, with one uniform set of parameters governing everything
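For context, here is a usage sketch following the pattern later published on the Hugging Face model card; the model id, task prompts, and post-processing helper are assumptions based on that card rather than on the paper itself:

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

model_id = "microsoft/Florence-2-large"  # assumed Hub id for the public release
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("your_image.jpg")  # any local image
task = "<OD>"  # other task prompts include "<CAPTION>" and "<CAPTION_TO_PHRASE_GROUNDING>"

inputs = processor(text=task, images=image, return_tensors="pt")
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=512,
)
raw = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]

# The same backbone serves every task; only the prompt and post-processing change.
result = processor.post_process_generation(raw, task=task, image_size=(image.width, image.height))
print(result)
```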

Deep Learning, Computer Vision and NLP

14 Nov, 14:31


Finally, we have a hallucination leaderboard!

Key Takeaways

๐Ÿ“ Not surprisingly, GPT-4 is the lowest.

๐Ÿ“ Open source LLama 2 70 is pretty competitive!

๐Ÿ“ Google's models are the lowest. Again, this is not surprising given that the #1 reason Bard is not usable is its high hallucination rate.

Really cool that we are beginning to do these evaluations and capture them in leaderboards!

Deep Learning, Computer Vision and NLP

13 Nov, 15:18


JARVIS-1: Open-Ended Multi-task Agents with Memory-Augmented Multimodal Language Models

abs: arxiv.org/abs/2311.05997
project page: craftjarvis-jarvis1.github.io

"We introduce JARVIS-1, an open-ended agent that can perceive multimodal input (visual observations and human instructions), generate sophisticated plans, and perform embodied control, all within the popular yet challenging open-world Minecraft universe."

Deep Learning, Computer Vision and NLP

31 Oct, 08:23


https://abvijaykumar.medium.com/fine-tuning-llm-parameter-efficient-fine-tuning-peft-lora-qlora-part-1-571a472612c4

Deep Learning, Computer Vision and NLP

09 Oct, 12:21


https://huggingface.co/blog/4bit-transformers-bitsandbytes

Deep Learning, Computer Vision and NLP

26 Sep, 16:41


Low-Rank Adaptation of Large Language Models (LoRA) is a training method that accelerates the training of large models while consuming less memory.

🤖 LoRA is like a special trick that helps computers learn better and faster. Imagine a computer trying to learn new things, like recognizing pictures or understanding language. When it learns, it uses something called "weights," which are like little helpers inside the computer.

🚀 Now, LoRA's trick is to make these little helpers work smarter. Instead of changing all the helpers every time the computer learns something new, LoRA only changes a few of them. It's like having a big group of friends, but only a couple of them have to do the hard work, and the others can rest.

💡 Here's how it works (see the sketch after this list):
1. The computer has these helpers (weights) that it uses to learn.
2. LoRA makes two special groups of helpers that are smaller and easier to work with.
3. The computer trains these special groups of helpers to learn new things without changing all the original helpers.
4. After training, the computer combines the new helpers with the original ones, like mixing two colors to get a new color.
5. This makes the computer learn faster and doesn't use too much computer memory.
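In practice, steps 2-4 map onto a few lines with Hugging Face's PEFT library. A minimal sketch (the base model id and hyperparameters are illustrative choices, not a recommendation):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a base model whose original weights will stay frozen.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # illustrative model id

# The two small "helper" matrices (rank r) are added to the attention projections.
config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor for the update
    target_modules=["q_proj", "v_proj"],   # which weights get LoRA adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the parameters are trained

# ... train as usual with your favorite Trainer ...
# After training, the adapters can be merged back into the base weights:
merged = model.merge_and_unload()
```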

🚀 The good things about LoRA are:
- It helps the computer learn without using too many helpers, so it's faster.
- The original helpers stay the same, so we can use them for different tasks.
- It can work with other tricks that make computers smarter.
- The computer works just as fast when using LoRA, so we don't have to wait.

So, LoRA is like a cool trick that helps computers learn better and faster without making them slow down. It's like having a superhero team of helpers inside the computer! ❤️

Deep Learning, Computer Vision and NLP

19 Sep, 07:59


Connect with us on WhatsApp: https://whatsapp.com/channel/0029Va8iIT7KbYMOIWdNVu2Q

Deep Learning, Computer Vision and NLP

11 Sep, 15:33


Dear friends,

I'd like to share a part of the origin story of large language models that isn't widely known.

A lot of early work in natural language processing (NLP) was funded by U.S. military intelligence agencies that needed machine translation and speech recognition capabilities. Then, as now, such agencies analyzed large volumes of text and recorded speech in various languages. They poured money into research in machine translation and speech recognition over decades, which motivated researchers to give these applications disproportionate attention relative to other uses of NLP.

This explains why many important technical breakthroughs in NLP stem from studying translation, more than you might imagine based on the modest role that translation plays in current applications. For instance, the celebrated transformer paper, "Attention Is All You Need" by the Google Brain team, introduced a technique for mapping a sentence in one language to a translation in another. This laid the foundation for large language models (LLMs) like ChatGPT, which map a prompt to a generated response.

Or consider the BLEU score, which is occasionally still used to evaluate LLMs by comparing their outputs to ground-truth examples. It was developed in 2002 to measure how well a machine-generated translation compares to a ground truth, human-created translation.

A key component of LLMs is tokenization, the process of breaking raw input text into sub-word components that become the tokens to be processed. For example, the first part of the previous sentence may be divided into tokens like this:

/A /key /component /of /LL/Ms/ is/ token/ization

The most widely used tokenization algorithm for text today is Byte Pair Encoding (BPE), which gained popularity in NLP after a 2015 paper by Sennrich et al. BPE starts with individual characters as tokens and repeatedly merges tokens that occur together frequently. Eventually, entire words as well as common sub-words become tokens. How did this technique come about? The authors wanted to build a model that could translate words that weren't represented in the training data. They found that splitting words into sub-words created an input representation that enabled the model, if it had seen "token" and "ization," to guess the meaning of a word it might not have seen before, such as "tokenization."
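As a quick illustration, you can watch BPE sub-words appear with any Hugging Face tokenizer (a hedged sketch; the exact splits depend on the tokenizer's vocabulary):

```python
from transformers import AutoTokenizer

# GPT-2 uses byte-level BPE; other models' vocabularies will split differently.
tok = AutoTokenizer.from_pretrained("gpt2")
print(tok.tokenize("A key component of LLMs is tokenization"))
# Rare words tend to break into frequent sub-words (e.g., "token" + "ization"),
# while common words usually stay whole.
```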

I don't intend this description of NLP history as advocacy for military-funded research. (I have accepted military funding, too. Some of my early work in deep learning at Stanford University was funded by DARPA, a U.S. defense research agency. This led directly to my starting Google Brain.) War is a horribly ugly business, and I would like there to be much less of it. Still, I find it striking that basic research in one area can lead to broadly beneficial developments in others. In similar ways, research into space travel led to LED lights and solar panels, experiments in particle physics led to magnetic resonance imaging, and studies of bacteria's defenses against viruses led to the CRISPR gene-editing technology.

So it's especially exciting to see so much basic research going on in so many different areas of AI. Who knows, a few years hence, what today's experiments will yield?

Keep learning!
Andrew Ng

3,155 subscribers · 31 photos · 5 videos