Latest posts from Scientifically (@scientificallly) on Telegram

Scientifically Telegram posts

Scientifically
Discover the world!
3,652 subscribers
428 photos
20 videos
Last updated 07.03.2025 03:06

The latest content shared by Scientifically on Telegram

Scientifically

23 Jan, 19:10

659

OpenAI has introduced Operator, its first fully functional AI agent.

The agent can seamlessly use a browser to perform tasks such as booking tickets, ordering food, reserving tables, and more. The system operates as a separate site on a ChatGPT subdomain, featuring a regular chat interface alongside a browser window. This browser is streamed simultaneously to both the user and the AI agent. Users can take control at any time, and for sensitive actions like payments, manual user intervention is mandatory.
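To make that flow concrete, here is a deliberately toy sketch of a confirmation-gated agent loop in Python. Everything in it is assumed for illustration (the Action/StubModel/StubBrowser names, the sensitivity rule); OpenAI has not published Operator's implementation.

```python
# A purely hypothetical sketch of the confirmation-gated loop described above.
# Nothing here reflects Operator's actual implementation.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str           # e.g. "click", "type", "pay", "done"
    detail: str = ""

class StubModel:
    """Pretends to plan a booking: search, pay, then stop."""
    def __init__(self):
        self.plan = [Action("click", "search flights"), Action("pay", "$120 fare"), Action("done")]
    def next_action(self, task: str, page: str) -> Action:
        return self.plan.pop(0)

class StubBrowser:
    def screenshot(self) -> str:
        return "<page state>"                        # the same view the user watches
    def execute(self, action: Action) -> None:
        print(f"executing {action.kind}: {action.detail}")

SENSITIVE = {"pay", "login"}                         # actions that require manual confirmation

def run_agent(task: str, model: StubModel, browser: StubBrowser) -> None:
    page = browser.screenshot()
    while True:
        action = model.next_action(task, page)
        if action.kind == "done":
            break
        if action.kind in SENSITIVE:
            if input(f"Confirm '{action.detail}'? [y/N] ").strip().lower() != "y":
                continue                             # skip the action; the user stays in control
        browser.execute(action)
        page = browser.screenshot()

run_agent("book a flight", StubModel(), StubBrowser())
```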

This reminds me of the startup Mighty, which initially developed a cloud-based browser but pivoted to image generation a couple of years ago (now known as Playground). Mighty went through Y Combinator back when Sam Altman was chair of its board of directors, so it's possible OpenAI acquired some of their intellectual property.

The system is powered by CUA (Computer-Using Agent), a new fine-tuned version of GPT-4o, which combines reasoning with image understanding. It surpasses Sonnet 3.6 (2024-10-22) in tasks involving computer use. However, OpenAI avoids direct comparisons with Google's equivalent model, likely because the performance gap is smaller. Notably, OpenAI's presentations are increasingly reminiscent of Apple's style—they refer to the previous model simply as "Previous SOTA" in their tables, with its name (Sonnet 3.6) only appearing in the footnotes.

While Anthropic and Google demonstrated similar capabilities months earlier, OpenAI was the first to launch a consumer-facing product, highlighting their differing priorities. Operator is already rolling out to Pro users (by the way, did you know the Pro subscription is running at a loss?). Access through the Plus plan and API is expected within a few weeks.

For now, you can access Operator at operator.chatgpt.com (available to Pro users in the US).
Scientifically

10 Jan, 09:45

699

The team behind Qwen has launched their chatbot.

In addition to open-source models from all the Qwen model families, they also offer proprietary MoE (Mixture of Experts) models. Their flagship general-purpose model is Qwen2.5-Plus, while Qwen2.5-Turbo is their long-context model, capable of handling up to 1 million tokens of context. There's also Qwen2-VL-Max, which seems to be just Qwen2-VL 72B—though that's not confirmed.

Feature-wise, it's quite solid for an early release. It includes artifacts, document uploads, and image input capabilities. A standout feature, which I haven't seen outside of Chatbot Arena, is the ability to send the same prompt to multiple models (up to three) simultaneously. However, this feature is still rough around the edges: you can't continue the conversation with just one of the models afterward, since the interface doesn't support that yet.
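For a rough idea of what that fan-out looks like under the hood, here is a minimal sketch using the OpenAI-compatible Python client with asyncio. The base_url and model identifiers below are placeholders, not Qwen's documented values.

```python
import asyncio
from openai import AsyncOpenAI

# Placeholder endpoint and model names -- check Qwen's own docs for the real ones.
client = AsyncOpenAI(base_url="https://example.com/v1", api_key="YOUR_KEY")
MODELS = ["qwen2.5-plus", "qwen2.5-turbo", "qwen2-vl-max"]

async def ask(model: str, prompt: str) -> str:
    resp = await client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return f"{model}: {resp.choices[0].message.content}"

async def fan_out(prompt: str) -> None:
    # Fire the same prompt at up to three models concurrently
    answers = await asyncio.gather(*(ask(m, prompt) for m in MODELS))
    for answer in answers:
        print(answer)

asyncio.run(fan_out("Explain mixture-of-experts in one paragraph."))
```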

Coming soon, the chatbot is expected to integrate search and image generation capabilities. It will be interesting to see whether they'll use FLUX again or develop their own solution. We'll have to wait and see.

The service is entirely free, similar to Mistral and DeepSeek. Their goal isn’t to profit from subscriptions but to promote their API and gather additional fine-tuning data. For those concerned about privacy, Anthropic’s Claude remains the only option where chat data isn’t used for training.

Check it out at chat.qwenlm.ai
Scientifically

31 Dec, 06:48

739

Dear Respected Scientists,

It is with great honor and admiration that I extend my heartfelt congratulations to each and every one of you. Your dedication, curiosity, and relentless pursuit of knowledge continue to inspire not only your peers but also future generations who dream of following in your footsteps.

Together, you have achieved remarkable milestones, tackled complex challenges, and expanded the boundaries of human understanding. Your contributions are shaping the world and providing solutions to issues that touch every aspect of our lives.

As we celebrate your collective accomplishments, let us also look to the future with renewed energy and optimism. The work you do today lays the foundation for discoveries that will transform tomorrow.

Once again, congratulations on all you have achieved. May your passion for science and commitment to excellence continue to drive progress and innovation.

With deepest respect and warmest regards, Scientifically
Scientifically

28 Dec, 13:28

729

Artificial Intelligence is no longer something from science fiction. It’s everywhere – from the apps on our phones to the advanced systems running factories and hospitals.

First, AI makes our daily lives easier. Think about navigation apps, voice assistants like Siri, or recommendations on Netflix. These are all powered by AI, helping us save time and make better choices.

Second, AI is transforming industries. In healthcare, it helps doctors diagnose diseases faster. In business, it analyzes huge amounts of data to make smarter decisions. Even in education, AI creates personalized learning experiences for students.

But it’s not just about convenience. AI also brings challenges, like questions about jobs, ethics, and privacy. It’s important for us to understand these issues and find the right balance.

AI is a tool – how we use it will shape our future. Let’s make sure it benefits everyone.
Scientifically

29 Oct, 12:29

1,356

If you've ever wondered how models like ChatGPT are built, this lecture provides a fantastic deep dive into the process! Delivered by Yann Dubois at Stanford's CS229: Machine Learning course in the summer of 2024, the lecture breaks down each step in creating a ChatGPT-like model, covering both the pretraining phase (language modeling) and post-training techniques (like SFT and RLHF). For each component, Yann goes over the essential practices for data collection, algorithms, and evaluation, giving listeners an inside look at the latest methods in the field.
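To give a concrete flavor of the post-training part the lecture covers, here is a minimal, generic sketch of the SFT objective in PyTorch: plain next-token cross-entropy with the prompt tokens masked out of the loss. It illustrates the standard recipe and is not code from the lecture.

```python
import torch
import torch.nn.functional as F

def sft_loss(logits: torch.Tensor, labels: torch.Tensor, response_mask: torch.Tensor) -> torch.Tensor:
    """Supervised fine-tuning objective: next-token cross-entropy over the
    response tokens only (prompt tokens do not contribute to the loss).

    logits:        (batch, seq_len, vocab) model outputs
    labels:        (batch, seq_len)        token ids of prompt + response
    response_mask: (batch, seq_len)        1 where the token is part of the response
    """
    # Shift by one so the logits at position t predict the token at t + 1.
    shifted_logits = logits[:, :-1, :]
    targets = labels[:, 1:]
    mask = response_mask[:, 1:].float()

    per_token = F.cross_entropy(
        shifted_logits.reshape(-1, shifted_logits.size(-1)),
        targets.reshape(-1),
        reduction="none",
    ).view(targets.shape)
    return (per_token * mask).sum() / mask.sum().clamp(min=1.0)

# Tiny smoke test with random tensors standing in for a real model's outputs.
B, T, V = 2, 10, 50
loss = sft_loss(torch.randn(B, T, V), torch.randint(0, V, (B, T)), torch.randint(0, 2, (B, T)))
print(loss.item())
```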

About the Speaker

Yann Dubois is a fourth-year PhD student in computer science at Stanford, working under renowned advisors Percy Liang and Tatsu Hashimoto. His research focuses on enhancing AI performance, especially when resources are limited. Recently, Yann joined the Alpaca team, where he's been experimenting with more efficient ways to train and evaluate language models by leveraging other LLMs.


https://youtu.be/9vM4p9NN0Ts?feature=shared
Scientifically

28 Oct, 19:34

966

Tokyo University of Science researchers developed a binarized neural network (BNN) using a new ternary gradient system to improve AI in edge IoT devices. This novel approach incorporates magnetic RAM and in-memory computing, reducing the need for cloud connectivity and increasing efficiency in energy-constrained devices. Their algorithm achieved high accuracy while maintaining the compactness required for IoT applications, especially for wearables and smart home tech. This design may lead to faster and more sustainable AI processing on limited-power IoT devices.

For more: https://www.computerweekly.com/news/366614778/Tokyo-University-of-Science-sets-pace-in-neural-networks-on-edge-IoT
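The post doesn't include the paper's algorithm, but the general ingredients — binarized weights trained with a straight-through estimator, and gradients coarsened to three levels — can be sketched generically in PyTorch. Everything below (layer sizes, the ternarization rule, the threshold) is an illustrative assumption, not the Tokyo University of Science method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """Binarize weights to {-1, +1} in the forward pass; pass gradients
    through where |w| <= 1 (the usual straight-through estimator)."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float()

class BinaryLinear(nn.Linear):
    def forward(self, x):
        return F.linear(x, BinarizeSTE.apply(self.weight), self.bias)

def ternarize(grad: torch.Tensor, rel_threshold: float = 0.05) -> torch.Tensor:
    """Quantize a gradient tensor to {-1, 0, +1}: small entries are dropped,
    the rest keep only their sign. Illustrative stand-in, not the paper's rule."""
    threshold = rel_threshold * grad.abs().mean()
    return torch.sign(grad) * (grad.abs() > threshold).float()

# Example: one training step where the weight gradient is ternarized before the update.
layer = BinaryLinear(16, 4)
x, target = torch.randn(8, 16), torch.randint(0, 4, (8,))
loss = F.cross_entropy(layer(x), target)
loss.backward()
with torch.no_grad():
    layer.weight -= 0.01 * ternarize(layer.weight.grad)
```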
Scientifically

27 Oct, 04:32

844

Choosing the best AI platform can feel overwhelming, but this chart makes it easier to see where each option stands in terms of speed, responsiveness, and cost.

In the green-shaded "most attractive quadrant," you can find platforms that offer a strong balance of fast response times and high output speed. Cerebras stands out as one of the fastest options here — it’s designed for quick AI responses, which makes it ideal for things like real-time chatbots or interactive applications. However, it comes at a higher cost, as shown by its larger size on the chart.

For a more budget-friendly approach, Google Vertex and Perplexity seem like good middle-ground options. They offer decent speed and responsiveness without the steep cost, so they’re better suited for projects where a balance of performance and cost matters.

Overall, this chart shows the trade-offs between AI speed, latency, and price, helping anyone get a sense of what might work best for their needs, from high-speed applications to cost-effective solutions.
Scientifically

26 Oct, 16:11

676

The article emphasizes the need to make teaching a more attractive profession to address the global shortage of teachers. It suggests that pay, working conditions, and the value placed on teachers by society play a crucial role in recruitment and retention. Quick fixes like bursaries and performance bonuses are ineffective. Instead, the focus should shift to raising the status of teaching, increasing resources, and improving student behavior to attract more graduates to the field. Countries like Finland and Singapore, where teachers are more respected, face fewer shortages.

For more, you can read here:
https://www.scimex.org/newsfeed/how-do-we-get-more-teachers-in-schools
Scientifically

25 Oct, 17:11

635

At first glance, this diagram looks like a roadmap to the entire universe of numbers, a beautifully layered structure that reveals how different number sets relate to one another. It’s like peeling back the layers of mathematics, starting from the most basic building blocks and working your way up to the more complex, abstract concepts that shape higher-level math.

At the base of this number hierarchy, we see natural numbers (N) — the counting numbers that we first encounter in childhood, like 1, 2, and 3. As you move outward, you see the familiar integers (Z), which expand the natural numbers by including negatives, like -1, -2, -3. Then come the rational numbers (Q), which fill in the gaps with fractions, such as ½ and -2/3.

But mathematics doesn't stop at the rationals. There are irrational numbers, like √2, that can't be written as a ratio of two integers but still fall within the realm of real numbers (R) — numbers we can locate on the number line. The transcendental numbers, like π and e, push things even further: they aren't the root of any polynomial equation with integer coefficients.

Then we get into more intriguing territory: imaginary numbers, which involve i, the square root of -1. Suddenly, we’re in the world of complex numbers (C), where numbers combine both real and imaginary components, like 1 + i or πi. This is where algebra, geometry, and complex analysis come together.
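The nesting the diagram depicts can be written as one chain of inclusions, with a sample element from each layer:

```latex
\[
\mathbb{N} \subset \mathbb{Z} \subset \mathbb{Q} \subset \mathbb{R} \subset \mathbb{C},
\qquad
3 \in \mathbb{N},\quad -2 \in \mathbb{Z},\quad \tfrac{1}{2} \in \mathbb{Q},\quad
\sqrt{2},\ \pi \in \mathbb{R},\quad 1 + i \in \mathbb{C}.
\]
```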

This diagram isn't just a classification system—it's a snapshot of mathematical elegance. Each category builds on the previous one, showing how mathematics flows from the simplest natural numbers all the way to the intricate, mind-bending realm of complex numbers. It's a reminder that math is both deeply structured and infinitely expansive, a system with layers to uncover and explore.
Scientifically

25 Oct, 09:56

598

Nearly 15 years ago, a small JetBrains engineering team took on an ambitious challenge: creating a new programming language to stand alongside industry giants. At the time, Java dominated the tech landscape, powering millions of projects but showing signs of stagnation, with few significant updates and a lack of modern features. Engineers everywhere were eager for fresh solutions.

Various developers tried to reshape the JVM ecosystem with new languages, sensing a fleeting opportunity to build something transformative. Kotlin was born out of this drive for change. But what factors fueled Kotlin’s rise, and what challenges did its creators face to secure its place in the tech world? This documentary tells the story straight from the innovators themselves.

https://youtu.be/E8CtE7qTb-Q?feature=shared