RIML Lab @rimllab Channel on Telegram



Robust and Interpretable Machine Learning Lab,
Prof. Mohammad Hossein Rohban,
Sharif University of Technology

https://www.aparat.com/mh_rohban

twitter.com/MhRohban

https://www.linkedin.com/company/robust-and-interpretable-machine-learning-lab/

RIML Lab (English)

Are you passionate about machine learning and eager to explore the latest advancements in the field? Look no further than RIML Lab! RIML Lab, short for the Robust and Interpretable Machine Learning Lab, is led by the esteemed Prof. Mohammad Hossein Rohban at Sharif University of Technology. This Telegram channel serves as a hub for individuals interested in delving into the world of machine learning, particularly focusing on robust and interpretable methodologies.

Prof. Mohammad Hossein Rohban, with his extensive knowledge and expertise, brings forth cutting-edge research and insights to help enthusiasts stay updated with the ever-evolving landscape of machine learning. By joining RIML Lab on Telegram, you not only gain access to valuable resources and educational content but also become part of a vibrant community of like-minded individuals passionate about pushing the boundaries of machine learning.

Whether you are a beginner looking to kickstart your journey in machine learning or an experienced professional seeking to deepen your understanding, RIML Lab offers something for everyone. Stay connected with Prof. Mohammad Hossein Rohban through the provided links to Aparat, Twitter, and LinkedIn, where you can further engage with the lab's research and connect with fellow members. Don't miss out on this incredible opportunity to be at the forefront of machine learning innovation. Join RIML Lab today and elevate your knowledge to new heights!

RIML Lab

12 Feb, 16:17


🚀 We will be live from 19:45. Join us here:
https://www.youtube.com/watch?v=Y4UZNc4eh4U

🎙 Title: The Increasing Role of Sensorimotor Experience in Artificial Intelligence
👨‍🏫 Speaker: Rich Sutton (Keen Technologies, University of Alberta, OpenMind Research Institute)

RIML Lab

09 Feb, 21:45


🎥 Recording of the first session of the System 2 course
🔸 Topic: Introduction & Motivation
🔸 Instructors: Dr. Rohban and Mr. Samiei
🔸 Date: 21 Bahman 1403 (9 February 2025)
🔸 YouTube link
🔸 Aparat link

RIML Lab

09 Feb, 20:09


🚀 Join Richard Sutton’s Talk at Sharif University of Technology

🎙 Title: The Increasing Role of Sensorimotor Experience in Artificial Intelligence
👨‍🏫 Speaker: Rich Sutton (Keen Technologies, University of Alberta, OpenMind Research Institute)
📅 Date: Wednesday
🕗 Time: 8 PM Iran Time
💡 Sign Up Here: https://forms.gle/q1M7qErWvydFxR9m6

RIML Lab

03 Feb, 20:04


💠 Compositional Learning Journal Club

Join us this week for an in-depth discussion on Compositional Learning in the context of cutting-edge text-to-image generative models. We will explore recent breakthroughs and challenges, focusing on how these models handle compositional tasks and where improvements can be made.

This Week's Presentation:

🔹 Title: Correcting Diffusion Generation through Resampling


🔸 Presenter: Ali Aghayari

🌀 Abstract:
This paper addresses distributional discrepancies in diffusion models, which cause missing objects in text-to-image generation and reduced image quality. Existing methods overlook this root issue, leading to suboptimal results. The authors propose a particle filtering framework that uses real images and a pre-trained object detector to measure and correct these discrepancies through resampling. Their approach improves object occurrence by 5% and FID by 1.0 on MS-COCO, outperforming previous methods in generating more accurate and higher-quality images.
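To make the resampling idea concrete, here is a minimal, hypothetical sketch (not the paper's code): candidate images are scored by a pretrained object detector for the objects named in the prompt, and the particle set is then resampled in proportion to those scores. The `detector` callable and the candidate list are assumed placeholders.

```python
# Minimal, hypothetical sketch of detector-guided particle resampling for
# text-to-image generation (illustrative only; `detector` is assumed to map
# an image to the set of class names it finds).
import numpy as np

def object_score(image, expected_objects, detector) -> float:
    """Fraction of the prompt's expected objects that the detector finds."""
    found = detector(image)
    return sum(obj in found for obj in expected_objects) / len(expected_objects)

def resample_particles(images, expected_objects, detector, rng=None):
    """Resample candidate images in proportion to their detection scores."""
    rng = rng or np.random.default_rng(0)
    weights = np.array([object_score(im, expected_objects, detector) for im in images])
    if weights.sum() == 0:              # no candidate contains any expected object
        weights = np.ones(len(images))  # fall back to uniform resampling
    probs = weights / weights.sum()
    idx = rng.choice(len(images), size=len(images), replace=True, p=probs)
    return [images[i] for i in idx]     # particle set carried to the next step
```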


📄 Papers: Correcting Diffusion Generation through Resampling


Session Details:
- 📅 Date: Tuesday
- 🕒 Time: 5:30 - 6:30 PM
- 🌐 Location: Online at vc.sharif.edu/ch/rohban

We look forward to your participation! ✌️

RIML Lab

31 Jan, 04:14


Research Assistant Position Available

The Robust and Interpretable Machine Learning (RIML) Lab at the Computer Engineering Department of Sharif University of Technology is seeking a number of highly motivated and talented research assistants to join our team to work on Large Language Model (LLM) Agents.

Qualifications:
• M.Sc. in Computer Science, Computer Engineering, or a related field, earned within the last 2 years
• Strong background in natural language processing, machine learning, and artificial intelligence
• Experience with large language models and their applications
• Excellent programming skills (e.g., Python and PyTorch)
• Excellent communication and teamwork skills

Interested candidates should submit the following documents to [email protected] by Feb. 12th:

• A cover letter describing your research/career goals and why you are interested in this position
• A detailed CV, including a list of publications

For more information about our recent research topics, please see my Google Scholar profile: https://scholar.google.com/citations?hl=en&user=pRyJ6FkAAAAJ&view_op=list_works&sortby=pubdate.

RIML Lab

31 Jan, 03:57


Postdoctoral Research Position Available

The Robust and Interpretable Machine Learning (RIML) Lab at the Computer Engineering Department of Sharif University of Technology is seeking a number of highly motivated and talented postdoctoral researchers to join our team. The successful candidate will work on cutting-edge research involving Large Language Model (LLM) Agents.

• Duration: 1-2 years, with the possibility of extension based on performance and funding

Responsibilities:
• Conduct innovative research on LLM Agents
• Collaborate with a multidisciplinary team of researchers
• Publish high-quality research papers in top-tier conferences and journals
• Mentor graduate and undergraduate students
• Present research findings at international conferences and workshops

Qualifications:
• Ph.D. in Computer Science, Computer Engineering, or a related field, earned within the last 2 years
• Strong background in natural language processing, machine learning, and artificial intelligence
• Experience with large language models and their applications
• Excellent programming skills (e.g., Python and PyTorch)
• Strong publication record in relevant areas
• Excellent communication and teamwork skills

Interested candidates should submit the following documents to [email protected] by Feb. 7th:
• A cover letter describing your research interests and career goals
• A detailed CV, including a list of publications
• Contact information for at least two references

For more information about our recent research topics, please see my Google Scholar profile: https://scholar.google.com/citations?hl=en&user=pRyJ6FkAAAAJ&view_op=list_works&sortby=pubdate.

RIML Lab

29 Jan, 15:51


Research Position at the Sharif Center for Information Systems and Data Science:

We are seeking several highly skilled students for a project targeting the NeurIPS conference deadline, focusing on predictive maintenance for batteries and bearings.
Candidates should be strong at precise implementation and at integrating new ideas into architectures such as contrastive learning, transformers, PINNs (physics-informed neural networks), and diffusion models, so as to rapidly enhance the research group's capabilities.

The project is a direct collaboration between Dr. Babak Khalaj, Dr. Siavash Ahmadi, and Dr. Mohammad Hossein Rohban.

To apply, please submit your CV via email: [email protected]

RIML Lab

28 Jan, 17:23


💠 Compositional Learning Journal Club

Join us this week for an in-depth discussion on Compositional Learning in the context of cutting-edge text-to-image generative models. We will explore recent breakthroughs and challenges, focusing on how these models handle compositional tasks and where improvements can be made.

This Week's Presentation:

🔹 Title: Can We Generate Images with CoT? Let's Verify and Reinforce Image Generation Step by Step


🔸 Presenter: Amir Kasaei

🌀 Abstract:

This paper explores the use of Chain-of-Thought (CoT) reasoning to improve autoregressive image generation, an area not widely studied. The authors propose three techniques: scaling computation for verification, aligning preferences with Direct Preference Optimization (DPO), and integrating these methods for enhanced performance. They introduce two new reward models, PARM and PARM++, which adaptively assess and correct image generations. Their approach improves the Show-o model, achieving a +24% gain on the GenEval benchmark and surpassing Stable Diffusion 3 by +15%.
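As a rough illustration of the "scaling computation for verification" ingredient, the sketch below implements plain best-of-N selection with a generic reward model; `sample_image` and `reward_model` are hypothetical placeholders, and this is not the paper's PARM/PARM++ implementation.

```python
# Best-of-N verification sketch: draw several candidates and keep the one the
# reward model prefers. `sample_image` and `reward_model` are hypothetical
# placeholders, not the paper's PARM/PARM++ models.
def best_of_n(prompt: str, sample_image, reward_model, n: int = 8):
    candidates = [sample_image(prompt) for _ in range(n)]   # independent generations
    scores = [reward_model(prompt, img) for img in candidates]
    best = max(range(n), key=lambda i: scores[i])           # highest-reward candidate
    return candidates[best], scores[best]
```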


📄 Papers: Can We Generate Images with CoT? Let's Verify and Reinforce Image Generation Step by Step


Session Details:
- 📅 Date: Wednesday
- 🕒 Time: 2:15 - 3:15 PM
- 🌐 Location: Online at vc.sharif.edu/ch/rohban

We look forward to your participation! ✌️

RIML Lab

26 Jan, 14:05


Unfortunately, today's session will not be held.
Future sessions will be announced through this channel.

RIML Lab

26 Jan, 06:47


💠 Compositional Learning Journal Club

Join us this week for an in-depth discussion on Compositional Learning in the context of cutting-edge text-to-image generative models. We will explore recent breakthroughs and challenges, focusing on how these models handle compositional tasks and where improvements can be made.

This Week's Presentation:

🔹 Title: Can We Generate Images with CoT? Let's Verify and Reinforce Image Generation Step by Step


🔸 Presenter: Amir Kasaei

🌀 Abstract:
This paper explores the use of Chain-of-Thought (CoT) reasoning to improve autoregressive image generation, an area not widely studied. The authors propose three techniques: scaling computation for verification, aligning preferences with Direct Preference Optimization (DPO), and integrating these methods for enhanced performance. They introduce two new reward models, PARM and PARM++, which adaptively assess and correct image generations. Their approach improves the Show-o model, achieving a +24% gain on the GenEval benchmark and surpassing Stable Diffusion 3 by +15%.


📄 Papers: Can We Generate Images with CoT? Let's Verify and Reinforce Image Generation Step by Step


Session Details:
- 📅 Date: Sunday
- 🕒 Time: 5:30 - 6:30 PM
- 🌐 Location: Online at vc.sharif.edu/ch/rohban

We look forward to your participation! ✌️

RIML Lab

14 Jan, 08:21


📣 TA Application Form

🤖 Course: System-2 AI
🧑🏻‍🏫 Instructors: Dr. Rohban, Dr. Soleymani, Mr. Samiei
Deadline: January 23rd

https://docs.google.com/forms/d/e/1FAIpQLSewqI25q5c3DcsdcCzhCVg42motC2S-bg_xuuPWZ0wA60rYHQ/viewform?usp=dialog

RIML Lab

17 Dec, 14:38


📣 TA Application Form

🤖 Deep Reinforcement Learning
🧑🏻‍🏫 Dr. Mohammad Hossein Rohban
Deadline: December 31st

https://docs.google.com/forms/d/e/1FAIpQLSduvRRAnwi6Ik9huMDFWOvZqAWhr7HHlHjXdZbst55zSv5Hmw/viewform

RIML Lab

16 Dec, 07:27


Hello. The slides of the Research Week presentation on the NeurIPS paper accepted from RIML are shared with you here. I have also given some explanations about the paper in this thread: https://x.com/MhRohban/status/1867803097596338499

RIML Lab

03 Dec, 15:51


💠 Compositional Learning Journal Club

Join us this week for an in-depth discussion on Compositional Learning in the context of cutting-edge text-to-image generative models. We will explore recent breakthroughs and challenges, focusing on how these models handle compositional tasks and where improvements can be made.

This Week's Presentation:

🔹 Title: GenArtist: Multimodal LLM as an Agent for Unified Image Generation and Editing

🔸 Presenter: Dr. Rohban

🌀 Abstract:
This innovative framework addresses the limitations of current image generation models in handling intricate text prompts and ensuring reliability through verification and self-correction mechanisms. Coordinated by a multimodal large language model (MLLM) agent, GenArtist integrates a diverse library of tools, enabling seamless task decomposition, step-by-step execution, and systematic self-correction. With its tree-structured planning and advanced use of position-related inputs, GenArtist achieves state-of-the-art performance, outperforming models like SDXL and DALL-E 3. This session will delve into the system’s architecture and its groundbreaking potential for advancing image generation and editing tasks.


📄 Papers: GenArtist: Multimodal LLM as an Agent for Unified Image Generation and Editing


Session Details:
- 📅 Date: Wednesday
- 🕒 Time: 3:30 - 4:30 PM
- 🌐 Location: Online at vc.sharif.edu/ch/rohban

We look forward to your participation! ✌️

RIML Lab

16 Nov, 08:13


💠 Compositional Learning Journal Club

Join us this week for an in-depth discussion on Compositional Learning in the context of cutting-edge text-to-image generative models. We will explore recent breakthroughs and challenges, focusing on how these models handle compositional tasks and where improvements can be made.

This Week's Presentation:

🔹 Title: Counting Understanding in Vision-Language Models

🔸 Presenter: Arash Marioriyad

🌀 Abstract:
Counting-related challenges represent some of the most significant compositional understanding failure modes in vision-language models (VLMs) such as CLIP. While humans, even in early stages of development, readily generalize over numerical concepts, these models often struggle to accurately interpret numbers beyond three, with the difficulty intensifying as the numerical value increases. In this presentation, we explore the counting-related limitations of VLMs and examine the proposed solutions within the field to address these issues.
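For readers who want to reproduce the basic failure mode, a minimal counting probe can be built with the open-source CLIP checkpoint from Hugging Face: score an image against captions that differ only in the stated count and check whether the top-scoring caption carries the true number. This is an illustrative probe, not the evaluation protocol of the listed papers.

```python
# Minimal counting probe for CLIP: does the top-scoring caption state the
# correct object count? Illustrative only, not the listed papers' protocol.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def predicted_count(image: Image.Image, obj: str, max_count: int = 10) -> int:
    counts = list(range(1, max_count + 1))
    prompts = [f"a photo of {n} {obj}{'s' if n > 1 else ''}" for n in counts]
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        sims = model(**inputs).logits_per_image[0]   # similarity to each caption
    return counts[int(sims.argmax())]

# e.g. predicted_count(Image.open("apples.jpg"), "apple") tends to err once the true count exceeds three.
```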

📄 Papers:
- Teaching CLIP to Count to Ten (ICCV, 2023)
- CLIP-Count: Towards Text-Guided Zero-Shot Object Counting (ACM-MM, 2023)


Session Details:
- 📅 Date: Sunday
- 🕒 Time: 5:00 - 6:00 PM
- 🌐 Location: Online at vc.sharif.edu/ch/rohban


We look forward to your participation! ✌️

RIML Lab

13 Nov, 09:42


🚨 Open Research Position: Visual Anomaly Detection

We are pleased to announce an open research position in the RIML Lab at Sharif University of Technology, supervised by Dr. Rohban.

🔍 Project Description:
Industrial inspection and quality control are among the most prominent applications of visual anomaly detection. In this context, the model is given a training set of solely normal samples to learn their distribution. During inference, any sample that deviates from this established normal distribution should be recognized as an anomaly.
This project aims to improve the capabilities of existing models, allowing them to detect intricate anomalies that extend beyond conventional defects.
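As a point of reference for newcomers, the sketch below shows one common baseline for this one-class setting (not the project's method): embed the normal training images with a pretrained backbone and score a test image by its distance to the nearest normal embedding.

```python
# A common baseline for one-class visual anomaly detection (not the project's
# method): kNN distance in the feature space of a pretrained backbone.
import torch
import torchvision.models as models
import torchvision.transforms as T

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()       # keep penultimate features as the embedding
backbone.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

@torch.no_grad()
def embed(images):                      # images: list of PIL images
    batch = torch.stack([preprocess(im) for im in images])
    return torch.nn.functional.normalize(backbone(batch), dim=1)

@torch.no_grad()
def anomaly_score(test_image, normal_bank):
    """Distance to the nearest normal embedding; larger means more anomalous."""
    z = embed([test_image])                  # (1, d)
    return torch.cdist(z, normal_bank).min().item()

# Usage: normal_bank = embed(normal_training_images); then threshold anomaly_score(x, normal_bank).
```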

Introductory Paper:
Deep Industrial Image Anomaly Detection: A Survey

Requirements:
- Good understanding of deep learning concepts
- Fluency in Python, PyTorch
- Willingness to dedicate significant time

Submit your application here:
Application Form

Application Deadline:
2024/11/22 (23:59 UTC+3:30)

If you have any questions, contact:
@sehbeygi79

RIML Lab

09 Nov, 16:32


💠 Compositional Learning Journal Club

Join us this week for an in-depth discussion on Compositional Learning in the context of cutting-edge text-to-image generative models. We will explore recent breakthroughs and challenges, focusing on how these models handle compositional tasks and where improvements can be made.

This Week's Presentation:

🔹 Title: Object-Attribute Binding in Text-to-Image Generation: Evaluation and Control

🔸 Presenter: Arshia Hemmat

🌀 Abstract:
This presentation introduces advancements in addressing compositional challenges in text-to-image (T2I) generation models. Current diffusion models often struggle to associate attributes accurately with the intended objects based on text prompts. To address this, a new Edge Prediction Vision Transformer (EPViT) is introduced for improved image-text alignment evaluation. Additionally, the proposed Focused Cross-Attention (FCA) mechanism uses syntactic constraints from input sentences to enhance visual attention maps. DisCLIP embeddings further disentangle multimodal embeddings, improving attribute-object alignment. These innovations integrate seamlessly into state-of-the-art diffusion models, enhancing T2I generation quality without additional model training.

📄 Paper: Object-Attribute Binding in Text-to-Image Generation: Evaluation and Control


Session Details:
- 📅 Date: Sunday
- 🕒 Time: 5:00 - 6:00 PM
- 🌐 Location: Online at vc.sharif.edu/ch/rohban


We look forward to your participation! ✌️

RIML Lab

02 Nov, 20:57


💠 Compositional Learning Journal Club

Join us this week for an in-depth discussion on Compositional Learning in the context of cutting-edge text-to-image generative models. We will explore recent breakthroughs and challenges, focusing on how these models handle compositional tasks and where improvements can be made.

This Week's Presentation:

🔹 Title: Backdooring Bias into Text-to-Image Models

🔸 Presenter: Mehrdad Aksari Mahabadi

🌀 Abstract:
This paper investigates the misuse of text-conditional diffusion models, particularly text-to-image models, which create visually appealing images based on user descriptions. While these images generally represent harmless concepts, they can be manipulated for harmful purposes like propaganda. The authors show that adversaries can introduce biases through backdoor attacks, affecting even well-meaning users. Despite users verifying image-text alignment, the attack remains hidden by preserving the text's semantic content while altering other image features to embed biases, amplifying them by 4-8 times. The study reveals that current generative models make such attacks cost-effective and feasible, with costs ranging from 12 to 18 units. Various triggers, objectives, and biases are evaluated, with discussions on mitigations and future research directions.

📄 Paper: Backdooring Bias into Text-to-Image Models

Session Details:
- 📅 Date: Sunday
- 🕒 Time: 5:00 - 6:00 PM
- 🌐 Location: Online at vc.sharif.edu/ch/rohban


We look forward to your participation! ✌️

RIML Lab

28 Oct, 12:44


Research Position at the Sharif Information Systems and Data Science Center
 
Project Description: Anomaly detection in time series across various datasets, including those related to autonomous vehicle batteries, predictive maintenance, and estimation of remaining useful life (RUL) upon anomaly detection in products, particularly electric vehicle batteries. The paper deadline for this project is at the end of February. The project also involves federated learning algorithms to support multiple local devices in anomaly detection, RUL estimation, and predictive maintenance on each local device.
 
Technical Requirements: Two electrical or computer engineering students with strong skills in deep learning, robustness concepts, time-series anomaly detection, and federated learning algorithms, along with a creative mindset and strong, clean implementation skills.
 
Benefits: Access to a new, well-equipped lab and research under the supervision of three professors in Electrical and Computer Engineering.

Dr. Babak Khalaj
Dr. Siavash Ahmadi
Dr. Mohammad Hossein Rohban

Please send your CV, with the subject line "Research Position in Time Series Anomaly Detection,"
to the email address: [email protected].

RIML Lab

27 Oct, 11:07


Unfortunately, today's session will not be held.
Future sessions will be announced through this channel.

RIML Lab

26 Oct, 15:23


💠 Compositional Learning Journal Club

Join us this week for an in-depth discussion on Compositional Learning in the context of cutting-edge text-to-image generative models. We will explore recent breakthroughs and challenges, focusing on how these models handle compositional tasks and where improvements can be made.

This Week's Presentation:

🔹 Title: Backdooring Bias into Text-to-Image Models

🔸 Presenter: Mehrdad Aksari Mahabadi

🌀 Abstract:
This paper investigates the misuse of text-conditional diffusion models, particularly text-to-image models, which create visually appealing images based on user descriptions. While these images generally represent harmless concepts, they can be manipulated for harmful purposes like propaganda. The authors show that adversaries can introduce biases through backdoor attacks, affecting even well-meaning users. Despite users verifying image-text alignment, the attack remains hidden by preserving the text's semantic content while altering other image features to embed biases, amplifying them by 4-8 times. The study reveals that current generative models make such attacks cost-effective and feasible, with costs ranging from 12 to 18 units. Various triggers, objectives, and biases are evaluated, with discussions on mitigations and future research directions.

📄 Paper: Backdooring Bias into Text-to-Image Models

Session Details:
- 📅 Date: Sunday
- 🕒 Time: 5:00 - 6:00 PM
- 🌐 Location: Online at vc.sharif.edu/ch/rohban


We look forward to your participation! ✌️

RIML Lab

19 Oct, 17:51


💠 Compositional Learning Journal Club

Join us this week for an in-depth discussion on Compositional Learning in the context of cutting-edge text-to-image generative models. We will explore recent breakthroughs and challenges, focusing on how these models handle compositional tasks and where improvements can be made.

This Week's Presentation:

🔹 Title: InitNO: Boosting Text-to-Image Diffusion Models via Initial Noise Optimization

🔸 Presenter: Amir Kasaei

🌀 Abstract:
Recent advancements in diffusion models, like Stable Diffusion, have shown impressive image generation capabilities, but ensuring precise alignment with text prompts remains a challenge. This presentation introduces Initial Noise Optimization (InitNO), a method that refines initial noise to improve semantic accuracy in generated images. By evaluating and guiding the noise using cross-attention and self-attention scores, the approach effectively enhances image-prompt alignment, as demonstrated through rigorous experimentation.
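Schematically, the core loop can be thought of as optimizing the starting latent against a differentiable alignment score before sampling; in the sketch below, `alignment_score` is a hypothetical stand-in for the paper's cross-attention and self-attention criteria, so this is an outline of the idea rather than the InitNO implementation.

```python
# Schematic sketch of initial-noise optimization: adjust the starting latent so
# a differentiable alignment score improves before running the sampler.
# `alignment_score` is a hypothetical stand-in for attention-based criteria.
import torch

def optimize_initial_noise(alignment_score, shape=(1, 4, 64, 64),
                           steps: int = 50, lr: float = 1e-2):
    latent = torch.randn(shape, requires_grad=True)   # candidate initial noise
    optimizer = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        loss = -alignment_score(latent)               # maximize the alignment score
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return latent.detach()                            # feed this latent to the diffusion sampler
```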


📄 Paper: InitNO: Boosting Text-to-Image Diffusion Models via Initial Noise Optimization

Session Details:
- 📅 Date: Sunday
- 🕒 Time: 5:00 - 6:00 PM
- 🌐 Location: Online at vc.sharif.edu/ch/rohban


We look forward to your participation! ✌️

RIML Lab

06 Oct, 08:25


🚨Open Position: Visual Compositional Generation Research 🚨

We are excited to announce an open research position for a project under Dr. Rohban at the RIML Lab (Sharif University of Technology). The project focuses on improving text-to-image generation in diffusion-based models by addressing compositional challenges.

🔍 Project Description:

Large-scale diffusion-based models excel at text-to-image (T2I) synthesis, but they still suffer from issues such as missing objects and improper attribute binding. This project aims to study and resolve these compositional failures to improve the quality of T2I models.

Key Papers:
- T2I-CompBench: A Comprehensive Benchmark for Open-world Compositional T2I Generation
- Attend-and-Excite: Attention-Based Semantic Guidance for T2I Diffusion Models
- If at First You Don’t Succeed, Try, Try Again: Faithful Diffusion-based Text-to-Image Generation by Selection
- ReNO: Enhancing One-step Text-to-Image Models through Reward-based Noise Optimization

🎯 Requirements:

- Must: PyTorch, Deep Learning
- Recommended: Transformers and Diffusion Models.
- Able to dedicate significant time to the project.


🗓 Important Dates:

- Application Deadline: 2024/10/12 (23:59 UTC+3:30)

📌 Apply here:
Application Form

For questions:
📧 [email protected]
💬 @amirkasaei

@RIMLLab
#research_application
#open_position

RIML Lab

05 Oct, 13:10


💠 Compositional Learning Journal Club

Join us this week for an in-depth discussion on Compositional Learning in the context of cutting-edge text-to-image generative models. We will explore recent breakthroughs and challenges, focusing on how these models handle compositional tasks and where improvements can be made.

This Week's Presentation:

🔹 Title: A semiotic methodology for assessing the compositional effectiveness of generative text-to-image models

🔸 Presenter: Amir Kasaei

🌀 Abstract:
This paper proposes a new methodology for evaluating text-to-image generation models, addressing limitations in current evaluation techniques. Existing methods, which rely on metrics such as fidelity and CLIPScore, often conflate criteria like position, action, and photorealism in their assessments. The new approach adapts model analysis from visual semiotics, establishing distinct visual composition criteria. It highlights three key dimensions: plastic categories, multimodal translation, and enunciation, each with specific sub-criteria. The methodology is tested on Midjourney and DALL·E, providing a structured framework that can be used for future quantitative analyses of generated images.

📄 Paper: A semiotic methodology for assessing the compositional effectiveness of generative text-to-image models

Session Details:
- 📅 Date: Sunday
- 🕒 Time: 5:00 - 6:00 PM
- 🌐 Location: Online at vc.sharif.edu/ch/rohban


We look forward to your participation! ✌️

RIML Lab

19 Sep, 12:41


🧠 Free registration is now open for the Rayan International AI Competition | Sharif University of Technology

🪙 Over $35,000 in cash prizes
🎓 The top 10 teams' results will be published in leading international AI conferences/journals
🗓 The competition starts on 26 Mehr 1403 (17 October 2024)

💬 Topics covered under Trustworthiness in Deep Learning:
💬 Model Poisoning
💬 Compositional Generalization
💬 Zero-Shot Anomaly Detection

👀 The Rayan international AI competition, themed Trustworthy AI and supported by the Vice-Presidency for Science and Technology, is organized by Sharif University of Technology. The competition runs in 3 stages (2 virtual and 1 in-person), starting on 26 Mehr.

⭐️ To support the top teams that reach the third stage, Rayan will cover travel and accommodation costs and will publish the top teams' scientific results in a leading conference or journal in the field, crediting the team members as authors. These participants will compete in the third phase for the $35,000 prize awarded to the top teams.

👥 Participating teams consist of 2 to 4 members.

💬 Registration is completely free until the end of 25 Mehr at the following address:
ai.rayan.global

🌐 LinkedIn
🌐@Rayan_AI_Contest

RIML Lab

16 Sep, 13:52


🧠 RL Journal Club: This Week's Session

🤝 We invite you to join us for this week's RL Journal Club session, where we will dive into a minimalist approach to offline reinforcement learning. In this session, we will explore how simplifying algorithms can lead to more robust and efficient models in RL, challenging the necessity of complex modifications commonly seen in recent advancements.

This Week's Presentation:

🔹 Title: Revisiting the Minimalist Approach to Offline Reinforcement Learning
🔸 Presenter: Professor Mohammad Hossein Rohban
🌀 Abstract: This presentation will delve into the trade-offs between simplicity and performance in offline RL algorithms. We will review the minimalist approach proposed in the paper, which re-evaluates core algorithmic features and shows that simpler models can achieve performance on par with more intricate methods. The discussion will include experimental results that demonstrate how stripping away complexity can lead to more effective learning, providing fresh insights into the design of RL systems.
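For context, this minimalist line of work builds on the TD3+BC-style actor objective, which adds a simple behavior-cloning penalty to a standard off-policy update; the sketch below shows that objective only and is not the paper's full implementation.

```python
# TD3+BC-style actor objective: maximize the critic's value while staying close
# to the dataset actions. Illustrative only, not the paper's implementation.
import torch

def actor_loss(actor, critic, states, dataset_actions, alpha: float = 2.5):
    pi = actor(states)                         # actions proposed by the policy
    q = critic(states, pi)                     # critic's estimate for those actions
    lam = alpha / q.abs().mean().detach()      # normalize the scale of the Q term
    bc = ((pi - dataset_actions) ** 2).mean()  # behavior-cloning penalty
    return -(lam * q).mean() + bc
```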

The presentation will be based on the following paper:

▪️ Revisiting the Minimalist Approach to Offline Reinforcement Learning (https://arxiv.org/abs/2305.09836)

Session Details:

📅 Date: Tuesday
🕒 Time: 4:00 - 5:00 PM
🌐 Location: Online at https://vc.sharif.edu/ch/rohban
📍 For in-person attendance, please message me on Telegram at @alirezanobakht78

☝️ Note: The discussion is open to everyone, but we can only host students of Sharif University of Technology in person.

💯 Join us for an insightful session where we rethink how much complexity is truly necessary for effective offline reinforcement learning! Don't miss this chance to deepen your understanding of RL methodologies.

✌️ We look forward to your participation!
#RLJClub #JClub #RIML #SUT #AI #RL

RIML Lab

14 Sep, 07:58


💠 Compositional Learning Journal Club

Join us this week for an in-depth discussion on Compositional Learning in the context of cutting-edge text-to-image generative models. We will explore recent breakthroughs and challenges, focusing on how these models handle compositional tasks and where improvements can be made.

This Week's Presentation:

🔹 Title: Divide, Evaluate, and Refine: Evaluating and Improving Text-to-Image Alignment with Iterative VQA Feedback

🔸 Presenter: Amir Kasaei

🌀 Abstract:
Text-conditioned image generation, particularly with latent diffusion models, has made significant progress in recent years. However, as text complexity increases, these models often struggle to accurately capture the semantics of prompts, and existing tools like CLIP frequently fail to detect these misalignments.

This presentation introduces a Decompositional-Alignment-Score, which breaks down complex prompts into individual assertions and evaluates their alignment with generated images using a visual question answering (VQA) model. These scores are then combined to produce a final alignment score. Experimental results show this method aligns better with human judgments compared to traditional CLIP and BLIP scores. Moreover, it enables an iterative process that improves text-to-image alignment by 8.7% over previous methods.

This approach not only enhances evaluation but also provides actionable feedback for generating more accurate images from complex textual inputs.
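A minimal sketch of the decompositional scoring idea is shown below, using an off-the-shelf VQA pipeline (`dandelin/vilt-b32-finetuned-vqa` via Hugging Face `transformers`); the assertion list is hand-written here, whereas the paper derives assertions automatically from the prompt, so treat this as an illustration rather than the paper's method.

```python
# Decompositional alignment sketch: ask each assertion as a yes/no VQA question
# and average the "yes" confidences. The assertion list is hand-written here.
from PIL import Image
from transformers import pipeline

vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")

def alignment_score(image: Image.Image, assertions: list[str]) -> float:
    yes_probs = []
    for assertion in assertions:
        answers = vqa(image=image, question=f"Is there {assertion}?", top_k=5)
        yes_probs.append(sum(a["score"] for a in answers if a["answer"].lower() == "yes"))
    return sum(yes_probs) / len(yes_probs)     # alignment score in [0, 1]

# e.g. alignment_score(img, ["a red ball", "a dog to the left of the ball"])
```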

📄 Paper: Divide, Evaluate, and Refine: Evaluating and Improving Text-to-Image Alignment with Iterative VQA Feedback


Session Details:
- 📅 Date: Sunday
- 🕒 Time: 2:00 - 3:00 PM
- 🌐 Location: Online at vc.sharif.edu/ch/rohban


We look forward to your participation! ✌️

RIML Lab

09 Sep, 09:02


🧠 RL Journal Club: This Week's Session

🤝 We invite you to join us for this week's RL Journal Club session, where we will explore the intriguing synergies between Reinforcement Learning (RL) and Large Language Models (LLMs). This session will delve into how these two powerful fields intersect, offering new perspectives and opportunities for advancement in AI research.

This Week's Presentation:

🔹 Title: Synergies Between RL and LLMs
🔸 Presenter: Moein Salimi
🌀 Abstract: In this presentation, we will review research studies that combine Reinforcement Learning (RL) and Large Language Models (LLMs), two domains that have been significantly propelled by deep neural networks. The discussion will center around a novel taxonomy proposed in the paper, categorizing the interaction between RL and LLMs into three main classes: RL4LLM, where RL enhances LLM performance in NLP tasks; LLM4RL, where LLMs assist in training RL models for non-NLP tasks; and RL+LLM, where both models work together within a shared planning framework. The presentation will explore the motivations behind these synergies, their successes, potential challenges, and avenues for future research.

The presentation will be based on the following paper:

▪️ The RL/LLM Taxonomy Tree: Reviewing Synergies Between Reinforcement Learning and Large Language Models (https://arxiv.org/abs/2402.01874)

Session Details:

📅 Date: Tuesday
🕒 Time: 3:30 - 5:00 PM
🌐 Location: Online at https://vc.sharif.edu/ch/rohban
📍 For in-person attendance, please message me on Telegram at @alirezanobakht78

☝️ Note: The discussion is open to everyone, but we can only host students of Sharif University of Technology in person.

💯 This session promises to be an enlightening exploration of how RL and LLMs can work together to push the boundaries of AI research. Don’t miss this opportunity to deepen your understanding and engage in thought-provoking discussions!

✌️ We look forward to your participation!

#RLJClub #JClub #RIML #SUT #AI #RL #LLM

RIML Lab

06 Sep, 18:55


💠 Compositional Learning Journal Club

This Week's Presentation:

🔹 Title: Aligning Text-to-Image Diffusion Model with Image-to-Text Concept Matching

🔸 Presenter: Arash Marioriyad

🌀 Abstract:
Diffusion models have achieved significant success in text-to-image generation. However, alleviating the misalignment between text prompts and generated images remains a challenging issue.
This presentation will focus on two observed causes of misalignment: concept ignorance and concept mis-mapping. To address these issues, we will discuss CoMat, an end-to-end diffusion model fine-tuning strategy that uses an image-to-text concept matching mechanism.
Using only 20K text prompts to fine-tune SDXL, CoMat significantly outperforms the baseline SDXL model on two text-to-image alignment benchmarks, achieving state-of-the-art performance.

📄 Paper:
CoMat: Aligning Text-to-Image Diffusion Model with Image-to-Text Concept Matching

Session Details:
- 📅 Date: Sunday, 8 September 2024
- 🕒 Time: 3:30 - 5:00 PM (GMT+3:30)
- 🌐 Location: Online at vc.sharif.edu/ch/rohban

We look forward to your participation! ✌️

RIML Lab

04 Sep, 14:16


"Teaching Assistantship for the Intelligent Medical Image Analysis Course"

⭕️ Students who would like to serve as teaching assistants for Dr. Rohban's Intelligent Medical Image Analysis course in the upcoming semester (first semester of 1403-04) can fill out the form below.

https://docs.google.com/forms/d/e/1FAIpQLSekQsk7e-UavxTfliCGSPpK7-dABoMpsslGgyGPG7F71hyKkw/viewform?usp=sf_link

RIML Lab

29 Aug, 18:26


Compositional Learning Journal Club at RIML Lab 🔥

We are pleased to announce the establishment of a new research group within RIML Lab, dedicated to the study and advancement of Compositional Learning.

Compositional learning is inspired by the inherent human ability to comprehend and generate complex ideas from simpler concepts. By enabling the recombination of learned components, compositional learning enhances a machine's ability to generalize to out-of-distribution samples encountered in real-world scenarios. This characteristic has spurred vibrant research in areas such as object-centric learning, compositional generalization, and compositional reasoning, with wide-ranging applications across various tasks, including controllable text generation, factual knowledge reasoning, image captioning, text-to-image generation, visual reasoning, speech processing, and reinforcement learning.

To promote collaboration and the exchange of knowledge, we are launching a weekly Journal Club. These sessions will be held every Sunday from 3:30 PM to 5:00 PM, where we will engage in discussions on the latest research papers and significant advancements in Compositional Learning.

For updates and additional information, please visit our blog: complearnjc.github.io.

To get in touch, you may contact us via Telegram at @amirkasaei and @arashmarioriyad.

We look forward to your participation.

RIML Lab

27 Aug, 11:41


Due to a power outage at the university, this session has been postponed to next week.

RIML Lab

26 Aug, 08:21


🧠 RL Journal Club: This Week's Session

🤝 We invite you to join us for this week's RL Journal Club session, where we will explore the intriguing synergies between Reinforcement Learning (RL) and Large Language Models (LLMs). This session will delve into how these two powerful fields intersect, offering new perspectives and opportunities for advancement in AI research.

This Week's Presentation:

🔹 Title: Synergies Between RL and LLMs
🔸 Presenter: Moein Salimi
🌀 Abstract: In this presentation, we will review research studies that combine Reinforcement Learning (RL) and Large Language Models (LLMs), two domains that have been significantly propelled by deep neural networks. The discussion will center around a novel taxonomy proposed in the paper, categorizing the interaction between RL and LLMs into three main classes: RL4LLM, where RL enhances LLM performance in NLP tasks; LLM4RL, where LLMs assist in training RL models for non-NLP tasks; and RL+LLM, where both models work together within a shared planning framework. The presentation will explore the motivations behind these synergies, their successes, potential challenges, and avenues for future research.

The presentation will be based on the following paper:

▪️ The RL/LLM Taxonomy Tree: Reviewing Synergies Between Reinforcement Learning and Large Language Models (https://arxiv.org/abs/2402.01874)

Session Details:

📅 Date: Tuesday
🕒 Time: 3:30 - 5:00 PM
🌐 Location: Online at https://vc.sharif.edu/ch/rohban
📍 For in-person attendance, please message me on Telegram at @infinity2357

☝️ Note: The discussion is open to everyone, but we can only host students of Sharif University of Technology in person.

💯 This session promises to be an enlightening exploration of how RL and LLMs can work together to push the boundaries of AI research. Don’t miss this opportunity to deepen your understanding and engage in thought-provoking discussions!

✌️ We look forward to your participation!

#RLJClub #JClub #RIML #SUT #AI #RL #LLM

RIML Lab

19 Aug, 16:32


🧠 RL Journal Club: This Week's Session

🤝 We invite you to join us for this week's RL Journal Club session, where we will dive into the fascinating world of Modular Reinforcement Learning. This session will explore the concept of modular RL, an approach that decomposes complex RL tasks into specialized components to enhance scalability and adaptability.

This Week's Presentation:

🔹 Title: Modular Reinforcement Learning
🔸 Presenter: Arash Marioriyad
🌀
Abstract: Modular RL is an approach that emphasizes the decomposition of complex RL-based learning tasks into modular components. This methodology addresses the scalability and adaptability challenges inherent in traditional reinforcement learning by structuring agents as collections of interacting modules, each specialized for specific sub-tasks or aspects of the environment.

The presentation will be based on the following papers:

▪️ Modular Lifelong Reinforcement Learning via Neural Composition (https://arxiv.org/abs/2207.00429)
▪️ Compete and Compose: Learning Independent Mechanisms for Modular World Models (https://arxiv.org/abs/2404.15109)
▪️ Multi-Task Reinforcement Learning with Soft Modularization (https://arxiv.org/abs/2003.13661)
▪️ Modular Multitask Reinforcement Learning with Policy Sketches (https://arxiv.org/abs/1611.01796)
▪️ Recurrent Independent Mechanisms (https://arxiv.org/abs/1909.10893)

Session Details:

📅 Date: Tuesday
🕒 Time: 3:00 - 4:30 PM
🌐 Location: Online at https://vc.sharif.edu/ch/rohban
📍 For in-person attendance, please message me on Telegram at @infinity2357.

☝️ Note: The discussion is open to everyone, but we can only host students of Sharif University of Technology in person.

💯 This session promises to be an engaging exploration of how modular approaches can transform the scalability and efficiency of reinforcement learning systems. Don't miss this opportunity to deepen your understanding and participate in thought-provoking discussions!

✌️ We look forward to your participation!

#RLJClub #JClub #RIML #SUT #AI #RL

RIML Lab

12 Aug, 11:10


RL Journal Club: This Week's Session

We are excited to invite you to this week's RL Journal Club session, where we will explore an influential paper in the field of Reinforcement Learning. The session will be presented by our professor, Mohammad Hossein Rohban.

This Week's Presentation Paper:

Title: MOReL: Model-Based Offline Reinforcement Learning
Link: https://arxiv.org/abs/2005.05951
Presenter: Professor Mohammad Hossein Rohban

In this session, we will discuss the MOReL framework, which introduces a model-based approach to offline reinforcement learning, aiming to improve the data efficiency and experimental velocity of RL. The paper explores how a pessimistic MDP can be used to safely and effectively train policies using only historical data, offering a fresh perspective on offline RL.
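To give a flavor of the pessimistic MDP construction, the sketch below steps a learned dynamics ensemble and returns a large penalty whenever the ensemble members disagree beyond a threshold (the "unknown state-action" halt); the ensemble, reward model, and threshold objects are hypothetical placeholders, not the MOReL codebase.

```python
# Sketch of a pessimistic MDP step: average a learned dynamics ensemble, but
# halt with a penalty when the ensemble disagrees too much (unknown region).
# The ensemble, reward model, and threshold are hypothetical placeholders.
import torch

def pessimistic_step(models, reward_model, state, action,
                     disagreement_threshold: float, halt_penalty: float = -100.0):
    preds = torch.stack([m(state, action) for m in models])  # (ensemble, state_dim)
    disagreement = torch.cdist(preds, preds).max()            # largest pairwise gap
    if disagreement > disagreement_threshold:
        return state, halt_penalty, True                      # HALT: unknown state-action
    return preds.mean(dim=0), reward_model(state, action), False
```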

Session Details:

Date: Tuesday
Time: 3:30 - 5:00 PM
Location: Online at https://vc.sharif.edu/ch/rohban
For in-person attendance, please message me on Telegram at @infinity2357.

Note: The discussion is open to everyone, but we can only host students of Sharif University of Technology in person.

We look forward to your participation!

#RLJClub #JClub #RIML #SUT #AI #RL

RIML Lab

06 Aug, 07:39


RL Journal Club: This Week's Session

We are pleased to invite you to this week's RL Journal Club session, where we will dive into another fascinating paper in the field of Reinforcement Learning.

Paper for Discussion:

Title: Three Dogmas of Reinforcement Learning 
Link: https://arxiv.org/abs/2407.10583

Join us as we explore and critically analyze the insights presented in this paper. This session promises to be a thought-provoking discussion, providing an opportunity to deepen your understanding of the fundamental concepts and challenges in RL.

Session Details:

Date: Tuesday
Time: 3:30 - 5:00 PM
Location: Online at https://vc.sharif.edu/ch/rohban
For in-person attendance, please message me on Telegram at @infinity2357.

Note: The discussion is open to everyone, but we can only host students of Sharif University of Technology in person.

We look forward to your participation!

#RLJClub #JClub #RIML #SUT #AI #RL

RIML Lab

29 Jul, 13:33


RL Research Group at RIML: Join Us!

We are excited to announce the formation of a new research group within RIML, dedicated to advancing the field of Reinforcement Learning. If you're passionate about AI and eager to explore the forefront of RL research, this is the perfect opportunity for you.

To foster collaboration and knowledge sharing, we are launching a weekly Journal Club. Every Tuesday from 3:30 to 5:00 PM, we will gather to discuss the latest research papers and breakthroughs in RL. This is a fantastic chance to deepen your understanding, engage in stimulating discussions, and contribute to the growing body of knowledge in this dynamic field.

This Week's Presentation Paper:

Title: On Representation Complexity of Model-based and Model-free Reinforcement Learning
Link: https://arxiv.org/abs/2310.01706
Presenter: Alireza Nobakht

Join us as we delve into the complexities of model-based versus model-free RL approaches.

Session Details:

Time: Tuesdays, 3:30 - 5:00 PM
Location: Online at https://vc.sharif.edu/ch/rohban
For in-person attendance, please message me on Telegram at @infinity2357.

Stay updated and learn more about our activities by visiting our blog: rljclub.github.io.

We look forward to seeing you and embarking on this exciting research journey together!

#RLJClub #JClub #RIML #SUT #AI #RL

RIML Lab

26 Jul, 14:16


📣 Dr. Rohban's remarks on the RAYAN AI international competition

💬 At the beginning of the second session of the first training course, Dr. Rohban explained how the competition is run, its stages, and the training courses. You can watch these remarks on Aparat.
🌐 https://www.aparat.com/v/kvj04m9

😌 @Rayan_AI_Course

RIML Lab

14 Jul, 04:41


#اخبار_پژوهشی_آزمایشگاه

Two papers published simultaneously at ECCV 2024 under the supervision of Dr. Rohban

1. Snuffy: Efficient Universal Approximating Whole Slide Image Classification Framework
Congratulations to Mr. Hossein Jafarinia (M.Sc. student) and Ms. Nahal Mirzaie (Ph.D. student) of the RIML Lab, and to Alireza Alipanah, Danial Hamdi, and Saeed Razavi, undergraduate students of the lab

2. Deciphering the Role of Representation Disentanglement: Investigating Compositional Generalization in CLIP Models
Congratulations to Mr. Reza Abbasi, M.Sc. student of the RIML Lab
Under the supervision of Dr. Rohban and Dr. Soleymani


https://www.linkedin.com/posts/mohammad-hossein-rohban-75567677_eccv-activity-7216325744702992386-cQVq?utm_source=share&utm_medium=member_android

RIML Lab

11 Jul, 11:14


Project Description:
This project is a collaborative effort between Dr. Rohban, Dr. Soleymani, and Dr. Asgari. Together, we aim to push the boundaries of language model evaluation for the Persian language. In this project, our primary goal is to benchmark and develop innovative methods for evaluating language models on the Persian language both robustly and comprehensively. Our approach will encompass both static and dynamic assessments to ensure thorough analysis. This initiative seeks to advance the field by addressing unique challenges posed by Persian language processing.

For more in-depth insights, please refer to the following papers:

"Khayyam Challenge (PersianMMLU): Is Your LLM Truly Wise to The Persian Language?"

Requirements:
Familiarity with LLM Concepts: Understanding the fundamentals and advancements in large language models.
Deep Learning Expertise: Practical knowledge and experience in deep learning techniques.
PyTorch Proficiency: Hands-on experience with the PyTorch framework is essential.
Commitment: Ability to dedicate significant time and maintain consistency throughout the project.


To apply for this position, please read the suggested papers and send your resume along with a brief summary of your research interests to [email protected]. We are eager to hear from motivated individuals who are passionate about advancing language model evaluation.

For any inquiries, feel free to reach out to us via the above email.

#open_position
#research_application

RIML Lab

07 Jul, 10:57


#اخبار_پژوهشی_آزمایشگاه

Top papers published since the beginning of 2023 under the supervision of Dr. Rohban

Fake It Until You Make It: Towards Accurate Near-Distribution Novelty Detection
Mr. Hossein Mirzaei, M.Sc. student at the RIML Lab - published at ICLR

Lagrangian objective function leads to improved unforeseen attack generalization
Mr. Mohammad Azizmalayeri, Ph.D. student at the RIML Lab - published in the Machine Learning journal

Compositions and methods for treating proliferative diseases
US Patent App.

Zerograd: Costless conscious remedies for catastrophic overfitting in the fgsm adversarial training
Ms. Zeinab Golgooni, Ph.D. student at the RIML Lab - published in Intelligent Systems with Applications

A deep learning framework to scale linear facial measurements to actual size using horizontal visible iris diameter: a study on an Iranian population
Dr. Hossein Mohammad-Rahimi, researcher at the RIML Lab - published in Scientific Reports

Weakly-Supervised Drug Efficiency Estimation with Confidence Score: Application to COVID-19 Drug Discovery
Ms. Nahal Mirzaie and Mr. Mohammad Valisanian, M.Sc. students at the RIML Lab - published at MICCAI

Forecasting influenza hemagglutinin mutations through the lens of anomaly detection
Mr. Mohammadreza Salehi, former student of the RIML Lab - published in Scientific Reports

Borderless azerbaijani processing: Linguistic resources and a transformer-based approach for azerbaijani transliteration
Ms. Reihaneh Zohrabi, M.Sc. student at the RIML Lab - published at ACL - supervised by Dr. Beigy, Dr. Asgari, and Dr. Rohban

Examination of lemon bruising using different CNN-based classifiers and local spectral-spatial hyperspectral imaging
Dr. Sajad Sabzi, postdoctoral researcher at the RIML Lab - published in Algorithms

A Robust Heterogeneous Offloading Setup Using Adversarial Training
Mr. Mahdi Amiri, M.Sc. student at the RIML Lab - published in IEEE Transactions on Mobile Computing - supervised by Dr. Rohban and Dr. Hessabi

Universal Novelty Detection Through Adaptive Contrastive Learning
Mr. Hossein Mirzaei and Mr. Mojtaba Nafez, M.Sc. students at the RIML Lab - published at CVPR

Killing It With Zero-Shot: Adversarially Robust Novelty Detection
Mr. Hossein Mirzaei, M.Sc. student at the RIML Lab - published at IEEE ICASSP

User Voices, Platform Choices: Social Media Policy Puzzle with Decentralization Salt
A group of undergraduate students - published at CHI

Comparison of 2D and 3D convolutional neural networks in hyperspectral image analysis of fruits applied to orange bruise detection
Dr. Sajad Sabzi (postdoctoral researcher) and Ms. Reihaneh Zohrabi (M.Sc. student) of the RIML Lab - published in the Journal of Food Science

RODEO: Robust Outlier Detection via Exposing Adaptive Outliers
Mr. Hossein Mirzaei and Mr. Mojtaba Nafez, M.Sc. students at the RIML Lab - published at ICML

Virtual screening for small-molecule pathway regulators by image-profile matching
Dr. Rohban and Dr. Anne E. Carpenter - published in Cell Systems

Khayyam Challenge (PersianMMLU): Is Your LLM Truly Wise to The Persian Language?
Coming Soon! :)

And dozens of other papers, which you can find on Dr. Rohban's Google Scholar.

Congratulations to all lab members for their hard work and research toward addressing the problems of society and the country, and for publishing papers in the top AI conferences and journals.

RIML Lab

05 Jul, 10:02


Greetings to all friends,
The Rayan event, led by Dr. Rohban, will be held soon. The Rayan scientific team is looking to grow and to collaborate with interested students. The main responsibilities of scientific team members include handling the training course (e.g., producing content/exercises) and designing challenges (e.g., implementing ideas and participating in problem design). Benefits of membership in the scientific team include a salary and a recommendation (subject to approval by the head of the scientific team). You can express your interest in joining the scientific team by filling out this form. Note that team members are expected to dedicate at least 15 hours per week to the event. The deadline for filling out this form is tomorrow.