CRITICAL//Consensus @breakthebrokenconsensus Channel on Telegram



#Accelerationism #AI #Anime #CCRU #CAS #Complexity #Cyberpunk #eBooks #NeoReaction #NickLand #NRx #Memes #Moldbug #News #Philosophy #Politics #TransHumanism #Religion #Tech #TheoryFiction #UKPOL

"Can what is playing you make it to level-2?"

CRITICAL//Consensus (English)

Are you ready to break free from the broken consensus? Look no further than the CRITICAL//Consensus Telegram channel, @breakthebrokenconsensus. This channel is a hub for all things related to Accelerationism, AI, Anime, CCRU, CAS, Complexity, Cyberpunk, eBooks, NeoReaction, Nick Land, NRx, Memes, Moldbug, News, Philosophy, Politics, TransHumanism, Religion, Tech, Theory Fiction, and UKPOL.

If you are interested in thought-provoking discussions and cutting-edge ideas in technology, philosophy, politics, and beyond, this channel is the place for you. With a motto that challenges you to reach 'level-2', CRITICAL//Consensus is a platform for the curious, the innovative, and those eager to delve into critical thinking and consensus-breaking concepts. Join us and immerse yourself in stimulating content and engaging conversations that will expand your horizons and challenge your perspectives. Together, let's push the boundaries of what is possible and navigate the complexities of our rapidly changing world.

CRITICAL//Consensus

21 Jan, 18:48


Links for 2025-01-21

AI:

1. Kimi k1.5: Scaling Reinforcement Learning with LLMs — An o1-level multi-modal model https://github.com/MoonshotAI/Kimi-k1.5/blob/main/Kimi_k1.5.pdf

2. Evolutionary search works outside of formally verifiable domains https://arxiv.org/abs/2501.09891

3. Self-playing Adversarial Language Game Enhances LLM Reasoning https://arxiv.org/abs/2404.10642

4. Inference Magazine — a new publication on AI progress. https://inferencemagazine.substack.com/

5. Five Recent AI Tutoring Studies https://www.lesswrong.com/posts/bs3yj8vLDKNnoa95m/five-recent-ai-tutoring-studies

6. DeepMind Expects Clinical Trials for AI-Designed Drugs This Year https://www.bloomberg.com/news/articles/2025-01-21/deepmind-expects-clinical-trials-for-ai-designed-drugs-this-year [no paywall: https://archive.is/EmZWZ]

7. Anthropic CEO Says AI Could Surpass Human Intelligence by 2027 https://www.wsj.com/livecoverage/stock-market-today-dow-sp500-nasdaq-live-01-21-2025/card/anthropic-ceo-says-ai-could-surpass-human-intelligence-by-2027-9tka9tjLKLalkXX8IgKA [no paywall: https://archive.is/n6Vya]

8. Diving into the Underlying Rules or Abstractions in o3's 34 ARC-AGI Failures https://substack.com/home/post/p-154931348

9. Infrastructure for AI Agents https://arxiv.org/abs/2501.10114

10. “In chain-of-thought, we're collapsing that beautiful many-dimensional vector and all its semantic meaning down into a single token after every forward pass... Why not use a continuous latent space for reasoning?” https://www.lesswrong.com/posts/D2Aa25eaEhdBNeEEy/worries-about-latent-reasoning-in-llms

11. GameFactory: Creating New Games with Generative Interactive Videos https://arxiv.org/abs/2501.08325

12. Do generative video models learn physical principles from watching videos? https://arxiv.org/abs/2501.09038

13. Inference Scaling and the Log-x Chart https://www.tobyord.com/writing/inference-scaling-and-the-log-x-chart

14. OpenAI was, it seems, the major funder of the FrontierMath benchmark https://www.lesswrong.com/posts/8ZgLYwBmB3vLavjKE/broader-implications-of-the-frontiermath-debacle

Brains and AI:

1. Humans and AI systems end up representing some stuff in remarkably similar ways https://www.biorxiv.org/content/10.1101/2024.12.26.629294v1

2. In some ways, the brain and language models process language in surprisingly similar ways https://www.nature.com/articles/s41467-024-46631-y

3. Predicting Human Brain States with Transformer https://arxiv.org/abs/2412.19814

4. Can AI Models Show Us How People Learn? Impossible Languages Point a Way. https://www.quantamagazine.org/can-ai-models-show-us-how-people-learn-impossible-languages-point-a-way-20250113/

Tech:

1. This MIT spinout wants to spool hair-thin fibers into patients’ brains https://techcrunch.com/2025/01/16/this-mit-spinout-wants-to-spool-hair-thin-fibers-into-patients-brains/

2. How to trick the immune system into attacking tumours https://www.nature.com/articles/d41586-025-00126-y [no paywall: https://archive.is/C8Fmt]

3. MIT’s Latest Bug Robot Is a Super Flyer. It Could One Day Help Bees Pollinate Crops. https://singularityhub.com/2025/01/17/mits-latest-bug-robot-is-a-super-flyer-it-could-one-day-help-bees-pollinate-crops/

4. A BCI controlling virtual fingers is used to fly a high-performance virtual quadcopter. https://www.nature.com/articles/s41591-024-03341-8

Math:

1. A Lottery Drawing Included Four Consecutive Numbers. What Are the Odds? https://www.nytimes.com/2024/12/19/us/lottery-mega-millions-four-numbers.html [no paywall: https://archive.is/viwjX]

2. The Wednesday Sleeping Beauty Problem https://www.umsu.de/blog/2025/813

3. Probability theory as a physical theory points to superdeterminism https://arxiv.org/abs/1811.10992

Short SF:

1. The Gentle Romance, a story about living through the transition to utopia. https://press.asimov.com/articles/gentle-romance

2. Why-chain https://aleph.se/andart2/fiction/why-chain/

Psychology:

1. IQ and Taleb's Wild Ride https://hereticalinsights.substack.com/p/iq-and-talebs-wild-ride

CRITICAL//Consensus

16 Jan, 10:37


Links for 2025-01-15

AI:

1. Google presents the successor to the Transformer architecture: Titans marks a significant step in neural network architecture by integrating a bio-inspired long-term memory mechanism that complements the short-term context modeling of traditional attention mechanisms. A key innovation is that the memory module is trained to learn how to memorize and forget during test time. This allows the model to adapt to new, unseen data distributions, which is crucial for real-world applications. The way Titans decides what to memorize is inspired by how the human brain prioritizes surprising or unexpected events. The authors introduce the concept of "momentary surprise" (how much a new input deviates from the model's current understanding) and "past surprise" (a decaying record of past surprises) to guide the memory module's updates. This mirrors the human tendency to remember events that stand out from the norm. https://arxiv.org/abs/2501.00663

2. Transformer^2: Self-adaptive LLMs — dynamically adapts to new tasks in real-time, using smart "expert" vectors to fine-tune performance. https://sakana.ai/transformer-squared/

3. Multiagent Finetuning: Self Improvement with Diverse Reasoning Chains https://llm-multiagent-ft.github.io/

4. Imagine while Reasoning in Space: Multimodal Visualization-of-Thought — MVoT moves beyond Chain-of-Thought (CoT) to enable AI to imagine what it thinks with generated visual images. By blending verbal and visual reasoning, MVoT makes tackling complex problems more intuitive, interpretable, and powerful. https://arxiv.org/abs/2501.07542

5. VideoRAG: A framework that enhances RAG by leveraging video content as an external knowledge source. https://arxiv.org/abs/2501.05874

6. O1 Replication Journey -- Part 3: Inference-time Scaling for Medical Reasoning https://arxiv.org/abs/2501.06458

7. The Lessons of Developing Process Reward Models in Mathematical Reasoning https://arxiv.org/abs/2501.07301

8. Exploring the Potential of Large Concept Models https://arxiv.org/abs/2501.05487

9. UC Berkeley releases an open-source reasoning model, trained for under $450, that matches o1-preview https://novasky-ai.github.io/posts/sky-t1/

10. MatchAnything: Universal Cross-Modality Image Matching with Large-Scale Pre-Training https://zju3dv.github.io/MatchAnything/
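The Titans surprise-driven memory update (item 1 above) can be sketched as a toy recurrence. This is an illustration with assumed hyperparameters and a simple linear associative memory, not the paper's implementation:

```python
import numpy as np

# Toy sketch of a Titans-style surprise-driven memory update.
# A linear associative memory M is updated online:
#   momentary surprise = gradient of the recall loss on the current pair
#   past surprise S    = decaying momentum of earlier gradients
#   S_t = eta * S_{t-1} - theta * grad ;  M_t = (1 - alpha) * M_{t-1} + S_t
# eta, theta, alpha below are illustrative choices.

rng = np.random.default_rng(0)
d = 8
M = np.zeros((d, d))                  # memory
S = np.zeros((d, d))                  # past surprise (gradient momentum)
eta, theta, alpha = 0.5, 0.005, 1e-4  # momentum, step size, forgetting rate

pairs = [(rng.standard_normal(d), rng.standard_normal(d)) for _ in range(50)]

def recall_loss(mem):
    return sum(float(np.sum((mem @ k - v) ** 2)) for k, v in pairs) / len(pairs)

loss_before = recall_loss(M)
for _ in range(20):                          # stream the pairs repeatedly
    for k, v in pairs:
        grad = 2.0 * np.outer(M @ k - v, k)  # momentary surprise
        S = eta * S - theta * grad           # fold into past surprise
        M = (1.0 - alpha) * M + S            # forget a little, then write
loss_after = recall_loss(M)
print(loss_after < loss_before)              # memory has learned to recall
```

The decay term is what lets the memory "forget": without it, M accumulates every write forever; with it, stale associations fade unless surprise keeps refreshing them.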

AI economics:

1. “…even though standard measures of AI quality scale poorly as a function of resources, the financial returns might still scale very well as a function of resources. Indeed, if they scale better than linearly, that would create a paradigm of increasing marginal returns…” https://www.tobyord.com/writing/the-scaling-paradox

2. Applying traditional economic thinking to AGI: a trilemma https://www.lesswrong.com/posts/TkWCKzWjcbfGzdNK5/applying-traditional-economic-thinking-to-agi-a-trilemma

Bio(tech):

1. Nanocarrier imaging at single-cell resolution across entire mouse bodies with deep learning https://www.nature.com/articles/s41587-024-02528-1

2. New computational chemistry techniques accelerate the prediction of molecules and materials https://news.mit.edu/2025/new-computational-chemistry-techniques-accelerate-prediction-molecules-materials-0114

3. ChemAgent: Self-updating Library in Large Language Models Improves Chemical Reasoning https://arxiv.org/abs/2501.06590

4. About 5% of cyanobacteria fished from the ocean are connected via nanotubes. https://www.quantamagazine.org/the-ocean-teems-with-networks-of-interconnected-bacteria-20250106/

5. The use of genetically engineered bacteria to recover or recycle chemicals and turn them into useful products is progressing fast https://www.bbc.com/news/articles/cz6pje1z5dqo

6. Heritability: what is it, what do we know about it, and how should we think about it? https://www.lesswrong.com/posts/xXtDCeYLBR88QWebJ/heritability-five-battles

7. Synchron to Advance Implantable Brain-Computer Interface Technology with NVIDIA Holoscan https://www.businesswire.com/news/home/20250113376337/en/Synchron-to-Advance-Implantable-Brain-Computer-Interface-Technology-with-NVIDIA-Holoscan

CRITICAL//Consensus

16 Jan, 10:36


Akira (1988)

CRITICAL//Consensus

16 Jan, 10:34


Links for 2025-01-13

AI:

1. MIDAS speeds up language model training by up to 40%. While MIDAS-trained models may have similar or slightly worse perplexity compared to traditional training methods, they perform significantly better on downstream reasoning tasks. https://arxiv.org/abs/2409.19044

2. Building AI Research Fleets https://www.lesswrong.com/posts/WJ7y8S9WdKRvrzJmR/building-ai-research-fleets

3. A superhuman forecaster seems reachable in 2025 https://arxiv.org/abs/2412.18544

4. Training Transformers for simple next token prediction on videos leads to competitive performance across all benchmarks. https://arxiv.org/abs/2501.05453

5. Optimizing LLM Test-Time Compute Involves Solving a Meta-RL Problem https://blog.ml.cmu.edu/2025/01/08/optimizing-llm-test-time-compute-involves-solving-a-meta-rl-problem/

6. Ensembling Large Language Models with Process Reward-Guided Tree Search for Better Complex Reasoning https://arxiv.org/abs/2412.15797

7. Creating a LLM-as-a-Judge That Drives Business Results https://hamel.dev/blog/posts/llm-judge/

8. Grokking at the Edge of Numerical Stability https://arxiv.org/abs/2501.04697

9. Can transformers be scaled up to AGI? Ilya Sutskever: Obviously, yes https://youtu.be/Ft0gTO2K85A?si=ab3ADAzLoUr4n5Ns&t=1680

10. Web UI for interacting with Qwen (Alibaba) models including their reasoning model https://chat.qwenlm.ai/

AI politics:

1. What would happen if remote work were fully automated? Matthew Barnett argues the economic impact would be massive—with the economy doubling in size even in the most conservative scenario. https://epoch.ai/gradient-updates/consequences-of-automating-remote-work

2. Once robots can do physical jobs, how quickly could they scale up? Converting car factories might produce 1 billion robots annually in under 5 years. Here is the maths of rapid robot deployment. https://www.lesswrong.com/posts/6Jo4oCzPuXYgmB45q/how-quickly-could-robots-scale-up

3. David Dalrymple on Safeguarded, Transformative AI https://www.youtube.com/watch?v=MPrU69sFQiE

4. Human takeover might be worse than AI takeover https://www.lesswrong.com/posts/FEcw6JQ8surwxvRfr/human-takeover-might-be-worse-than-ai-takeover

5. NVIDIA CEO Jensen Huang: "the critical technologies necessary to build general humanoid robotics is just around the corner", adding that an aging population and declining birthrate make this imperative, as the world needs more workers https://youtu.be/Z_DR1_zhmCU?si=3-yePRXlqzQtTHeX&t=65
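The compounding arithmetic behind the robot scale-up estimate in item 2 can be sketched directly. The specific numbers below (initial capacity, growth factor) are assumptions for the example, not figures from the linked post:

```python
from math import log

# If converted factories start at some annual capacity and capacity
# grows by a fixed factor each year, time-to-target follows from:
#   target = initial * growth**years  =>  years = log(ratio) / log(growth)

def years_to_target(initial_per_year, growth_factor, target_per_year):
    """Years of compounded growth needed to reach a target annual output."""
    return log(target_per_year / initial_per_year) / log(growth_factor)

# Suppose converted car factories start at 10 million robots/year and
# capacity triples annually as robots help build more robot factories:
y = years_to_target(10e6, 3.0, 1e9)
print(round(y, 1))  # -> 4.2, i.e. under 5 years at these assumed rates
```

The qualitative point survives any particular choice of numbers: with multiplicative growth, the answer scales only logarithmically with the target, so even a 100x gap closes in a handful of doubling (or tripling) periods.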

Health:

1. “Yet more evidence that Alzheimer's is caused by human herpesvirus variants. The HHV family of viruses is almost certainly responsible for a very wide variety of horrifying human illnesses. (EBV, for example, is the root cause of Multiple Sclerosis.)” (via Perry E. Metzger) https://www.science.org/doi/10.1126/scisignal.ado6430

2. Heritable polygenic editing: the next frontier in genomic medicine? Very large potential gains in long-term health from completely removing certain bad alleles present in our collective gene pool. https://www.nature.com/articles/s41586-024-08300-4

Psychology:

1. Are the average genetic scores for intelligence decreasing between birth cohorts? https://www.emilkirkegaard.com/p/dysgenics-within-and-between

2. New study finds enhanced creativity in autistic adults is linked to co-occurring ADHD rather than autism itself (N=352). https://psycnet.apa.org/fulltext/2025-66159-001.html

Computer science:

1. “Above my pay grade: Jensen Huang and the quantum computing stock market crash” https://scottaaronson.blog/?p=8567

2. “The single axiom ((a•b)•c)•(a•((a•c)•a))=c is a complete axiom system for Boolean algebra” https://writings.stephenwolfram.com/2025/01/who-can-understand-the-proof-a-window-on-formalized-mathematics/

3. The purposeful drunkard https://www.lesswrong.com/posts/s39XbvtzzmusHxgky/the-purposeful-drunkard

4. The Dilithium implementation in Google and Microsoft's Caliptra root of trust was just broken: attackers extracted keys by measuring the switching power consumption of internal pipeline registers https://eprint.iacr.org/2025/009.pdf
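The validity (though not the completeness) of the single axiom in item 2 is easy to brute-force over the two-element Boolean algebra, reading “•” as NAND; the hard part of the cited result, which this sketch does not touch, is proving the axiom is complete:

```python
from itertools import product

def nand(x, y):
    return 1 - (x & y)

def axiom_holds(a, b, c):
    # ((a•b)•c) • (a•((a•c)•a)) == c, with • read as NAND
    left = nand(nand(nand(a, b), c), nand(a, nand(nand(a, c), a)))
    return left == c

# Check all 8 assignments over {0, 1}
ok = all(axiom_holds(a, b, c) for a, b, c in product((0, 1), repeat=3))
print(ok)  # -> True
```

Holding in the two-element algebra is only a necessary condition; completeness means every Boolean identity is derivable from this one equation, which requires a proof, not an enumeration.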

CRITICAL//Consensus

16 Jan, 10:29


Links for 2025-01-09

AI:

1. Towards System 2 Reasoning in LLMs: Learning How to Think With Meta Chain-of-Thought https://arxiv.org/abs/2501.04682

2. Inference time MCTS can boost Qwen *1.5B* to o1-preview levels on MATH and AIME '24. Microsoft presents rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking https://arxiv.org/abs/2501.04519

3. Smaller, Weaker, Yet Better: “Our findings reveal that models fine-tuned on weaker & cheaper generated data consistently outperform those trained on stronger & more-expensive generated data across multiple benchmarks…” https://arxiv.org/abs/2408.16737

4. Agent Laboratory: Using LLM Agents as Research Assistants https://agentlaboratory.github.io/

5. AI unveils strange chip designs, while discovering new functionalities https://www.nature.com/articles/s41467-024-54178-1

6. A foundation model of transcription across human cell types https://www.nature.com/articles/s41586-024-08391-z

7. AI helps doctors detect more breast cancer in the largest real-world study https://www.nature.com/articles/s41591-024-03408-6

8. Ovarian cancer diagnosis improved by AI https://healthcare-in-europe.com/en/news/ovarian-cancer-diagnosis-ai.html

9. Microsoft CEO Satya Nadella: "we fundamentally believe the scaling laws are absolutely still great and will work and continue to work" https://www.youtube.com/live/bYgP-tC5BFU?si=kASx6DIOk6ioBpUl&t=932

10. Beyond Sight Finetuning Generalist Robot Policies with Heterogeneous Sensors via Language Grounding https://fuse-model.github.io/

11. NVIDIA Redefines Game AI With ACE Autonomous Game Characters https://www.nvidia.com/en-us/geforce/news/nvidia-ace-autonomous-ai-companions-pubg-naraka-bladepoint/

12. While everyday users still encounter hallucinating chatbots and the media declares an AI slowdown, behind the scenes, AI is rapidly advancing in technical domains. https://time.com/7205359/why-ai-progress-is-increasingly-invisible/

13. Large language models for artificial general intelligence (AGI): A survey of foundational principles and approaches https://arxiv.org/abs/2501.03151v1

14. Google is building its own ‘world modeling’ AI team for games and robot training https://www.theverge.com/2025/1/7/24338053/google-deepmind-world-modeling-ai-team-gaming-robot-training

15. Tyler Cowen - The #1 Bottleneck to AI progress Is Humans https://www.dwarkeshpatel.com/p/tyler-cowen-4

Biotech:

1. “What I saw would cause any biotech leader to sit up and take notice. I saw science parks many multiples larger than Kendall Square or South SF, filled with startups. Integrated biology, chemistry, biochem and structural biology, and vivarium labs were running at scale. Even smaller biotechs were running vivariums processing tens of thousands of in vivo mouse experiments monthly. Programs which went from standing start to registering for human clinical trials within 18 months(!) were not uncommon.” https://timmermanreport.com/2025/01/china-is-here-to-stay-as-a-leader-on-the-global-biotech-stage/

2. Sana Biotechnology Announces Positive Clinical Results from Type 1 Diabetes Study of Islet Cell Transplantation Without Immunosuppression https://www.globenewswire.com/news-release/2025/01/07/3005841/0/en/Sana-Biotechnology-Announces-Positive-Clinical-Results-from-Type-1-Diabetes-Study-of-Islet-Cell-Transplantation-Without-Immunosuppression.html

Miscellaneous:

1. Particle that only has mass when moving in one direction observed for first time https://www.psu.edu/news/research/story/particle-only-has-mass-when-moving-one-direction-observed-first-time

2. On Eating the Sun https://www.lesswrong.com/posts/6Fo8fjvpL7pwCTz3t/on-eating-the-sun

3. NVIDIA Announces World’s Smallest AI Supercomputer https://www.nvidia.com/en-us/project-digits/

4. New study says the Romans suffered "widespread cognitive decline including an estimated 2.5-to-3 point reduction" in IQ in their European territories because of industrialized silver mining releasing lead into the atmosphere https://pnas.org/doi/10.1073/pnas.2419630121

CRITICAL//Consensus

12 Jan, 07:20


🚨During her interview with Nick Land, Talk Tuah's Hailey Welch recently went viral with her rant on reterritorialising hyperstitional cybergothic allegory within accelerationism.🚨

HW: "The tacit aim in the work of the CCRU was an attempt to find a place for human agency once the motor of transformation that drives modernity is understood to be inhuman and indeed indifferent to the human. The attempt to participate vicariously in its positive feedback loop by fictioning or even mimicking it can be understood as an answer to this dilemma. The conspicuous fact that, shunned by the mainstream of both the 'continental philosophy' and cultural studies disciplines which it hybridized, the Cyberculture material had more subterranean influence on musicians, artists, and fiction writers than on traditional forms of political theory or action, indicated how its stance proved more appropriable as an aesthetic than effective as a political force."

Land: "Wow."

CRITICAL//Consensus

09 Jan, 12:35


Links for 2025-01-07

AI:

1. Language agents backed by open-source, non-frontier LLMs can match and exceed both frontier LLM agents and human experts on multiple scientific tasks at up to 100x lower inference cost. https://arxiv.org/abs/2412.21154

2. A LLM agent for multi-agent settings that generates hypotheses about other agents' latent states in natural language, adapting to diverse agents across collaborative, competitive, and mixed-motive domains https://arxiv.org/abs/2407.07086

3. Microsoft's Charles Lamanna: "By this time next year, you'll have a team of [AI] agents working for you" https://www.fastcompany.com/91254053/25-experts-predict-how-ai-will-change-business-and-life-in-2025

4. Google demonstrates scalability of AI agents, enabling complex workflow automation. https://www.kaggle.com/whitepaper-agents

5. "AI agents is likely to be a multi-trillion dollar opportunity" — Jensen Huang, Nvidia CEO https://www.youtube.com/live/k82RwXqZHY8?si=lVdJlwg51hKwIyU_&t=2638

6. Given sufficient context, LLMs can suddenly shift from their concept representations to 'in-context representations' that align with the task structure https://arxiv.org/abs/2501.00070

7. B-STaR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners https://arxiv.org/abs/2412.17256

8. Transformer takes 22 seconds of brain state vectors, and outputs the next 5 seconds of human neural activity with decent accuracy. https://www.arxiv.org/abs/2412.19814

9. Generating All-Atom Protein Structure from Sequence-Only Training Data https://amyxlu.github.io/plaid/

10. A metagenomic foundation model to help detect and prevent the next pandemic early https://metagene.ai/metagene-1-paper.pdf

11. Turn smartphones into pocket laboratories for farmers https://ai.meta.com/blog/inarix-agricultural-supply-chain-meta-dino-v2/

12. Self-driving tractors and trucks https://www.theverge.com/2025/1/6/24334357/john-deere-autonomous-tractor-truck-orchard-mow-ces

13. NVIDIA Cosmos: World Foundation Model Platform for Physical AI https://github.com/NVIDIA/Cosmos

14. Google: "we believe scaling on video and multimodal data is on the critical path to artificial general intelligence" https://techcrunch.com/2025/01/06/google-is-forming-a-new-team-to-build-ai-that-can-simulate-the-physical-world/

15. Test-time Computing: from System-1 Thinking to System-2 Thinking https://arxiv.org/abs/2501.02497

Health:

1. The mission to restore sight takes a big leap forward! 🚀 🌟 Large-scale RF mapping without visual input for neuroprostheses https://www.medrxiv.org/content/10.1101/2024.12.22.24319047v1.full

2. Transplanting young Hematopoietic Stem Cells into old mice rejuvenates their blood's epigenetic age and boosts physical performance https://www.nature.com/articles/s41422-024-01057-5

Tech:

1. Why China Is Building a Thorium Molten-Salt Reactor https://spectrum.ieee.org/chinas-thorium-molten-salt-reactor

2. Storing Thousands of Terabytes in a Single Gram of DNA https://www.nature.com/articles/s41586-024-08040-5

Math:

1. “I have two children, at least one of whom is a boy born on a day that I'll tell you in 5 minutes. What is the chance that both are boys, and what will the chance be after I tell you the day?” https://www.lesswrong.com/posts/7i4qTDCxf5QBYWqvg/practicing-bayesian-epistemology-with-two-boys-probability

2. How can it be feasible to find proofs? https://drive.google.com/file/d/1-FFa6nMVg18m1zPtoAQrFalwpx2YaGK4/view
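The boy-born-on-a-day puzzle in item 1 can be checked by exhaustive enumeration, under the usual assumption that each child's sex and birth day are independent and uniform:

```python
from fractions import Fraction
from itertools import product

# A child is a (sex, weekday) pair: 14 equally likely states per child,
# hence 196 equally likely two-child families.
children = list(product("BG", range(7)))
pairs = list(product(children, repeat=2))

def prob_both_boys(condition):
    kept = [p for p in pairs if condition(p)]
    both = [p for p in kept if p[0][0] == "B" and p[1][0] == "B"]
    return Fraction(len(both), len(kept))

# "At least one is a boy":
p1 = prob_both_boys(lambda p: any(c[0] == "B" for c in p))
# "At least one is a boy born on a specific named day" (say day 0):
p2 = prob_both_boys(lambda p: any(c == ("B", 0) for c in p))
print(p1, p2)  # -> 1/3 13/27
```

Naming the day shifts the answer from 1/3 toward 1/2 (13/27) because the more specific condition nearly halves the double-counting of two-boy families, which is exactly the ambiguity the linked post explores.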

Science:

1. Any living creature in our universe, natural or artificial, will inevitably view light's speed in empty space as extremely fast. https://profmattstrassler.com/2024/10/03/why-is-the-speed-of-light-so-fast-part-2/

2. “The penguins nodded off >10,000 times per day, engaging in bouts of bihemispheric and unihemispheric slow-wave sleep lasting on average only 4 seconds, but resulting in the accumulation of >11 hours of sleep for each hemisphere.” https://www.mpg.de/21169426/1127-orni-penguins-nesting-in-a-dangerous-environment-obtain-large-quantities-of-sleep-via-seconds-long-microsleeps-154562-x

CRITICAL//Consensus

23 Dec, 17:14


Nick Land, Machnic Desire from Fanged Noumena

CRITICAL//Consensus

23 Dec, 17:00


Edgerunners

CRITICAL//Consensus

23 Dec, 16:24


Links for 2024-12-23

AI:

1. What o3 Becomes by 2028 — “We haven't seen AIs made from compute optimal LLMs pretrained on these systems yet, but the systems were around for 6+ months, so the AIs should start getting deployed imminently, and will become ubiquitous in 2025.” https://www.lesswrong.com/posts/NXTkEiaLA4JdS5vSZ/what-o3-becomes-by-2028

2. Orienting to 3 year AGI timelines https://www.lesswrong.com/posts/jb4bBdeEEeypNkqzj/orienting-to-3-year-agi-timelines

3. “AI that exceeds human performance in nearly every cognitive domain is almost certain to be built and deployed in the next few years. We need to act accordingly now.” https://milesbrundage.substack.com/p/times-up-for-ai-policy

4. The new hyped Genesis simulator is up to 10x slower, not 10-80x faster https://stoneztao.substack.com/p/the-new-hyped-genesis-simulator-is

5. Stanford researchers introduced a new system that can generate physically plausible human-object interactions from natural language https://hoifhli.github.io/

6. “We analyzed 18 LLMs and found units mirroring the brain's language, theory of mind, and multiple demand networks!” https://arxiv.org/abs/2411.02280

7. A biologically-inspired hierarchical convolutional energy model predicts V4 responses to natural videos https://www.biorxiv.org/content/10.1101/2024.12.16.628781v1

8. LlamaFusion: Adapting Pretrained Language Models for Multimodal Generation https://arxiv.org/abs/2412.15188

9. Memory Layers at Scale https://arxiv.org/abs/2412.09764

10. Deliberative alignment: reasoning enables safer language models https://openai.com/index/deliberative-alignment/

11. A foundation model for generalizable disease detection from retinal images https://www.nature.com/articles/s41586-023-06555-x

12. Prompting Depth Anything for 4K Resolution Accurate Metric Depth Estimation https://promptda.github.io/

13. ⇆ Marigold-DC: Zero-Shot Monocular Depth Completion with Guided Diffusion https://marigolddepthcompletion.github.io/

14. Startup’s autonomous drones precisely track warehouse inventories https://news.mit.edu/2024/corvus-autonomous-drones-precisely-track-warehouse-inventories-1220

15. MIT engineers developed AI frameworks to identify evidence-driven hypotheses that could advance biologically inspired materials. https://news.mit.edu/2024/need-research-hypothesis-ask-ai-1219

16. Anthropic co-founder Jack Clark says when he and Dario Amodei went to the White House in 2023 where Kamala Harris told them, "We've got our eye on you guys. AI is going to be a really big deal and we're now actually paying attention" https://youtu.be/om2lIWXLLN4?si=ySlUNtsBzZVe7Tc-&t=705

17. UN Secretary-General Antonio Guterres tells the UN Security Council that technology will never move in the future as slowly as today https://x.com/tsarnick/status/1871016318247604624

Technology:

1. Aqueous Homogeneous Miniature Reactors Could Supply U.S. Bases With Unlimited Fuel https://www.forbes.com/sites/davidhambling/2024/12/05/miniature-reactors-could-supply-us-bases-with-unlimited-fuel/

2. A Billion Times Faster: Laser Neurons Ignite the Future of AI https://opg.optica.org/optica/fulltext.cfm?uri=optica-11-12-1690&id=565919

3. MIT engineers grow “high-rise” 3D chips https://news.mit.edu/2024/mit-engineers-grow-high-rise-3d-chips-1218

Math:

1. Tarski's high school algebra problem inquires whether all identities involving addition, multiplication, and exponentiation over the positive integers can be derived solely from a specific set of eleven axioms commonly taught in high school mathematics. https://en.wikipedia.org/wiki/Tarski%27s_high_school_algebra_problem

2. Mathematicians Uncover a New Way to Count Prime Numbers https://www.quantamagazine.org/mathematicians-uncover-a-new-way-to-count-prime-numbers-20241211/

Miscellaneous:

1. Game-Changing Dual Cancer Therapy Completely Eradicates Tumors Without Harsh Side Effects https://news.mit.edu/2024/implantable-microparticles-can-deliver-two-cancer-therapies-1028

2. Are most senescent cells immune cells? https://www.nature.com/articles/s12276-024-01354-4

CRITICAL//Consensus

23 Dec, 16:24


It never ceases to amaze me how much progress has been made in the last few years.

Original source: https://www.youtube.com/watch?v=ctWfv4WUp2I

CRITICAL//Consensus

23 Dec, 16:16


GHOST, 2024

onespeg

CRITICAL//Consensus

02 Dec, 12:29


Akira 💊

CRITICAL//Consensus

02 Dec, 12:29


Architecture and cityscapes from Katsuhiro Otomo’s “Akira”

CRITICAL//Consensus

02 Dec, 12:23


February 8, 1993!

CRITICAL//Consensus

02 Dec, 12:22


Links for 2024-11-30

AI:

1. Alibaba takes on OpenAI's o1 with new "Reasoning" AI model QwQ: Reflect Deeply on the Boundaries of the Unknown (32B beats o1-mini and competes with o1-preview) https://www.lesswrong.com/posts/eM77Zz8fTMcGpk6qo/new-o1-like-model-qwq-beats-claude-3-5-sonnet-with-only-32b [Terence Tao says QwQ seems to be significantly better at MO problems than previous open source models https://mathstodon.xyz/@tao/113568284621180843]

2. Eric Schmidt says he has been shocked recently by how quickly China is catching up to the US in AI and that the "threat escalation matrix" goes up with each level of improvement in the technology https://youtu.be/AjgwIRPnb_M?si=omP4RanS1PXKgGOk&t=1592

3. “most people don't know that the original research on scaling laws came from Baidu in 2017 – not OpenAI in 2020” https://x.com/jxmnop/status/1861473014673797411

4. A Revolution in How Robots Learn https://www.newyorker.com/magazine/2024/12/02/a-revolution-in-how-robots-learn [no paywall: https://archive.is/JVbJS]

5. Do you want to teach your robot with one demo, then sit back and watch it instantly perform that task? Instant Policy https://www.robot-learning.uk/instant-policy

6. Genomic Bottleneck Algorithm "...performs tasks like image recognition almost as effectively as state-of-the-art AI." https://www.cshl.edu/the-next-evolution-of-ai-begins-with-ours/

7. Star Attention: Efficient LLM Inference over Long Sequences https://arxiv.org/abs/2411.17116

8. Beyond Examples: High-level Automated Reasoning Paradigm in In-Context Learning via MCTS https://arxiv.org/abs/2411.18478

9. Predicting Emergent Capabilities by Finetuning — “We find that our emergence law can accurately predict the point of emergence up to 4x the FLOPs in advance.” https://arxiv.org/abs/2411.16035

10. A Foundation Model for the Earth System: “Aurora outperforms operational forecasts for air quality, ocean waves, tropical cyclone tracks, and high-resolution weather forecasting at orders of magnitude smaller computational expense than dedicated existing systems.” https://arxiv.org/abs/2405.13063

11. Announcing Agent Graph System — a new approach to multi-step agent tool use, improving GPT-4o success rate by 4x https://xpander.ai/2024/11/20/announcing-agent-graph-system/

12. OpenAI board chair Bret Taylor says there will be continuous improvement in the quality of AI for the foreseeable future because there are 3 independent areas in which progress can be made and "the likelihood that something stalls out is very low" https://youtu.be/U5HrbLHk0w4?si=iQy272t-BduEKe3H&t=296

13. Yann LeCun joins Sam Altman and Demis Hassabis in estimating AGI within the next 5-10 years. https://youtu.be/JAgHUDhaTU0?si=98ITctEctok0tVC9&t=4481

14. “We present an automatic framework to identify, interpret and steer neurons within LMM for safe AGI” https://arxiv.org/abs/2411.14982

15. Estimates of GPU or equivalent resources of large AI players for 2024/5 https://www.lesswrong.com/posts/bdQhzQsHjNrQp7cNS/estimates-of-gpu-or-equivalent-resources-of-large-ai-players

16. A machine learning study has found that the political leanings of economists not only influence the language they use to present their results in academic research, but also the very estimates they produce for several “objective” variables, such that the implied policy prescription is consistent with their partisan leaning. https://academic.oup.com/ej/article/134/662/2439/7659819

17. A new golden age of discovery https://deepmind.google/public-policy/ai-for-science/

18. “Aphantasia is a fascinating variation in human cognitive experience and may gesture toward the future success of multimodal AI technology.” https://hollisrobbinsanecdotal.substack.com/p/aphantasia-and-mental-modeling

19. OpenAI’s GPT-4o Makes AI Clones of Real People With Surprising Ease https://singularityhub.com/2024/11/29/openais-gpt-4o-makes-ai-clones-of-real-people-with-surprising-ease/

20. ChatGPT voice as a real-time universal translator https://x.com/ky__zo/status/1861870413699272948

CRITICAL//Consensus

29 Nov, 15:40


ᴇɴᴅ ᴡᴏʀʟᴅꜱ, 2024

onespeg

CRITICAL//Consensus

29 Nov, 12:27


🔗

CRITICAL//Consensus

27 Nov, 15:08


HYPER-RACISM

CRITICAL//Consensus

21 Nov, 18:56


From the Cyberpunk tabletop RPG (1988)

CRITICAL//Consensus

21 Nov, 18:53


CCRU Anime Opening

CRITICAL//Consensus

21 Nov, 08:55


NEW - A church in Switzerland is now using an AI hologram of Jesus to take confessions from worshippers.

https://www.disclose.tv/id/z33eqyr1bv/

@disclosetv

CRITICAL//Consensus

21 Nov, 08:54


NEW - Figure robotics say "autonomous fleets" are here.

@disclosetv

CRITICAL//Consensus

19 Nov, 19:23


Waifu.ch 🌺

CRITICAL//Consensus

19 Nov, 19:23


Cyberpunk landscapes by Dana Ulama

CRITICAL//Consensus

19 Nov, 19:17


US government commission pushes Manhattan Project-style AI initiative

Read more: https://www.reuters.com/technology/artificial-intelligence/us-government-commission-pushes-manhattan-project-style-ai-initiative-2024-11-19/

No paywall: https://archive.is/oFSsZ

Download the report here: https://www.uscc.gov/annual-report/2024-annual-report-congress

CRITICAL//Consensus

19 Nov, 19:17


Links for 2024-11-18

AI:

1. Gwern on diminishing returns to scaling and AI in China https://www.reddit.com/r/slatestarcodex/comments/1gsv897/gwern_on_the_diminishing_returns_to_scaling_and/

2. Training LLM agents via «embodied» exploration in their native environment – with short programs as motor commands. https://minihf.com/posts/2024-11-03-weave-agent-dev-log-2/

3. Interpretable Contrastive Monte Carlo Tree Search Reasoning https://arxiv.org/abs/2410.01707

4. AI is now designing chips for AI https://bigthink.com/the-future/ai-is-now-designing-chips-for-ai/

5. Stanford researchers introduced Virtual Lab, an AI system where LLM agents collaborate as research teams https://www.biorxiv.org/content/10.1101/2024.11.11.623004v1

6. XBOW's security AI finds a previously unknown bug in Scoold https://xbow.com/blog/xbow-scoold-vuln/

7. Study Finds ChatGPT Outperforms Doctors at Diagnosing Illness — Human doctors: 74%; Human doctors using ChatGPT: 76%; ChatGPT alone: 90% https://www.nytimes.com/2024/11/17/health/chatgpt-ai-doctors-diagnosis.html [no paywall: https://archive.is/TmdRq]

8. Now anyone in LA can hail a Waymo robotaxi https://techcrunch.com/2024/11/12/now-anyone-in-la-can-hail-a-waymo-robotaxi/

9. Trump Team Is Seeking to Ease US Rules for Self-Driving Cars https://www.bloomberg.com/news/articles/2024-11-17/trump-team-said-to-want-to-ease-us-rules-for-self-driving-cars [no paywall: https://archive.is/S9qOq]

10. LLaVA-o1: Let Vision Language Models Reason Step-by-Step https://arxiv.org/abs/2411.10440

11. Nvidia presents LLaMA-Mesh: Unifying 3D Mesh Generation with Language Models https://research.nvidia.com/labs/toronto-ai/LLaMA-Mesh/

12. How to create a rational LLM-based agent? using game-theoretic workflow! https://arxiv.org/abs/2411.05990

13. Emergence of Hidden Capabilities: Exploring Learning Dynamics in Concept Space https://arxiv.org/abs/2406.19370

14. The Dawn of GUI Agent: A Preliminary Case Study with Claude 3.5 Computer Use https://arxiv.org/abs/2411.10323

15. AI models opened up to US defense https://spectrum.ieee.org/ai-used-by-military

16. NASA & Microsoft unveiled Earth Copilot, an AI that answers simple questions about satellite data. https://www.youtube.com/watch?v=YJs6kU4BtR4&t=1s

17. Scaling Laws for Pre-training Agents and World Models https://arxiv.org/abs/2411.04434

Technology:

1. Design of facilitated dissociation enables control over cytokine signaling duration https://www.biorxiv.org/content/10.1101/2024.11.15.623900v1

2. Really doing mechanics at the quantum level https://nanoscale.blogspot.com/2024/11/really-doing-mechanics-at-quantum-level.html

3. Nuclear Power Was Once Shunned at Climate Talks. Now, It’s a Rising Star. https://www.nytimes.com/2024/11/15/climate/cop29-climate-nuclear-power.html [no paywall: https://archive.is/H9rr3]

Brain:

1. “Could long-term memories theoretically be extracted from the static structure of a preserved brain? In our new pre-print, we find the typical neuroscientist thinks there's about a 40% chance it could be done.” https://osf.io/preprints/psyarxiv/keq7w

2. “…even a von Neumann only had slightly better genes than everyone else, probably no more than a few hundred. Hence, anyone who does get thousands of extra good variants will be many SDs beyond what we currently see.” https://gwern.net/embryo-selection

Miscellaneous:

1. That orange streak is Mercury with its glowing sodium plasma tail, passing in front of the Pleiades cluster. https://spaceweathergallery.com/indiv_upload.php?upload_id=184299

2. “Show a typical economist a theory article, and watch how he ‘reads’ it: He reads the abstract and introduction, flips the theory pages, then reads the conclusion. If math is so enlightening, why do even the mathematically able routinely skip the math?” https://www.betonit.ai/p/economath_failshtml

3. “0.00385% of New York's population were responsible for 33% of the shoplifting arrests in the city.” https://x.com/cremieuxrecueil/status/1858243351234986401

CRITICAL//Consensus

19 Nov, 19:17


We need jungle I'm afraid

CRITICAL//Consensus

13 Nov, 16:32


NEW - First artwork by humanoid robot sells for over $1.0 million

READ: https://t.co/sijYf1YcL4

@InsiderPaper

CRITICAL//Consensus

13 Nov, 13:19


Atom Cyber

CRITICAL//Consensus

13 Nov, 12:02


Links for 2024-11-12


AI:

1. This study says it achieved human-level scores on the ARC challenge with LLMs using Test-Time Training (TTT). Can TTT combined with LLMs be enough to achieve abstract reasoning? https://ekinakyurek.github.io/papers/ttt.pdf [code: https://github.com/ekinakyurek/marc]

2. People underestimate how powerful test-time compute is: compute for longer, in parallel, or fork and branch arbitrarily—like cloning your mind 1,000 times and picking the best thoughts. https://www.tylerhouchin.com/blogs/entering-the-inference-era/
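The fork-and-branch idea in item 2 is, at its simplest, best-of-N sampling: generate many candidates in parallel, score each with a verifier, and keep the winner. A toy sketch of that loop, where both the candidate generator and the scorer are invented stand-ins (a real system would sample an LLM and score with a reward model or verifier):

```python
import random

# Toy best-of-N test-time compute: sample many candidate "thoughts",
# score each, keep the best. Task and scorer are stand-ins.

def generate_candidate(rng):
    # Stand-in for one forked reasoning attempt: guess a number in [0, 10).
    return rng.uniform(0, 10)

def score(candidate, target=7.3):
    # Stand-in verifier: closer to the (hypothetical) target is better.
    return -abs(candidate - target)

def best_of_n(n, seed=0):
    rng = random.Random(seed)
    candidates = [generate_candidate(rng) for _ in range(n)]
    return max(candidates, key=score)

# Spending more samples shrinks the error of the best candidate found.
for n in (1, 10, 1000):
    print(n, round(abs(best_of_n(n) - 7.3), 3))
```

Because the same seeded stream is reused, the N=1000 pool contains the N=1 candidate, so more compute can only match or improve the result.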

3. Improving semantic understanding in speech language models via brain-tuning — Speech models like Whisper can be improved by fine-tuning on brain data (fMRI collected by listening to podcasts). https://arxiv.org/abs/2410.09230

4. ManipGen: a sim2real agent for zero-shot manipulation. ManipGen handles complex tasks in the real world like organizing shelves, tidying cluttered tables, and more – all from text input and with no human demonstrations! https://mihdalal.github.io/manipgen/

5. Kinetix: an open-ended universe of physics-based tasks for RL https://kinetix-env.github.io/

6. Lucid V1: a world model that can emulate Minecraft environments in real-time on consumer hardware! https://ramimo.substack.com/p/lucid-v1-a-world-model-that-does

7. Dualformer: Controllable Fast and Slow Thinking by Learning with Randomized Reasoning Traces https://arxiv.org/abs/2410.09918

8. Dreaming Out Loud: A Self-Synthesis Approach For Training Vision-Language Models With Developmentally Plausible Data https://arxiv.org/abs/2411.00828

9. Researchers say the path to wise AIs runs through metacognition https://arxiv.org/abs/2411.02478

10. Can AI models collectively match human forecasting abilities on real-world events? An LLM ensemble beat the 50% baseline (Brier score 0.20 vs. 0.25) and matched human crowd accuracy (Brier score 0.19) with no statistically significant difference. 61% of its predictions fell above the 50th percentile, and exposure to human median predictions improved its forecasts by 17-28%, narrowing prediction intervals from 17.75 to 14.22 for GPT-4 and from 11.67 to 8.28 for Claude 2. https://www.science.org/doi/10.1126/sciadv.adp1528
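The Brier scores quoted in item 10 are easy to compute: a Brier score is just the mean squared error between forecast probabilities and binary outcomes, so 0.25 is what you get by always guessing 50%. A minimal sketch using made-up forecasts, not the study's data:

```python
# Hedged sketch of the Brier score used to compare forecasters above.
# The probabilities and outcomes below are illustrative, not from the paper.

def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities (0..1) and
    binary outcomes (0 or 1). Lower is better; always guessing 0.5
    scores exactly 0.25."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# A skilled, well-calibrated forecaster on five events...
probs = [0.8, 0.7, 0.9, 0.3, 0.2]
outcomes = [1, 1, 1, 0, 0]
print(round(brier_score(probs, outcomes), 3))  # 0.054

# ...versus the unskilled 50% baseline on the same events.
print(brier_score([0.5] * 5, outcomes))  # 0.25
```

On this scale, the paper's gap between 0.20 (ensemble) and 0.25 (baseline) is a meaningful amount of forecasting skill.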

11. OpenAI's Kevin Weil says AI models "are going to get smarter at an accelerating rate" with voice and translation at a "magical" level already and vision to come https://youtu.be/IxkvVZua28k?si=0nJsYDyqDRCARPuk&t=2126

Prediction markets:

1. Google's difficulties in forecasting LLM progress using an internal prediction market https://asteriskmag.com/issues/08/the-death-and-life-of-prediction-markets-at-google

2. The Online Sports Gambling Experiment Has Failed https://www.lesswrong.com/posts/tHiB8jLocbPLagYDZ/the-online-sports-gambling-experiment-has-failed

Health:

1. "James Fickel has dedicated $200 million he made betting on Ether to becoming one of the world’s biggest investors in" longevity & neuroscience https://www.bloomberg.com/news/articles/2024-11-11/crypto-millionaire-fuels-push-to-transform-brain-research [no paywall: https://archive.is/VhxMU]

2. When muscles work out, they help neurons to grow, a new study shows https://news.mit.edu/2024/when-muscles-work-out-they-help-neurons-grow-1112

3. What Ketamine Therapy Is Like https://www.lesswrong.com/posts/zgAws2AoFE3adigvy/what-ketamine-therapy-is-like

Technology:

1. "Neural Networks (MNIST inference) on the “3-cent” Microcontroller" (90% MNIST in 1 kiloword) https://cpldcpu.wordpress.com/2024/05/02/machine-learning-mnist-inference-on-the-3-cent-microcontroller/

2. Shocking New Memory Tech: Crystal-to-Glass Transformation Using a Billion Times Less Energy https://iisc.ac.in/events/self-shocks-turn-crystal-to-glass-at-ultralow-power-density/

Space:

1. “The radio source ASKAP J1935+2148 is an amazing thing…It could be a really weird pulsar, but nobody knows how a pulsar spinning so slowly could put out radio waves.” https://mathstodon.xyz/@johncarlosbaez/113460684713479446

CRITICAL//Consensus

12 Nov, 13:41


Dario Amodei, CEO of Anthropic, in an interview with Lex Fridman:

- In the unlikely scenario that we proceed in a straight line, with no bottlenecks in data, compute, or energy, we could reach AGI in the next few years: 2026 or 2027.

- AI models are on a trajectory to surpass human reasoning and performance at professional tasks: "if we extrapolate the straight curve within a few years, we will get to these models being above the highest professional level of humans"

- $100 billion AI data centers will be built by 2027 and he is bullish about powerful AI happening soon because we are starting to reach PhD-level intelligence

Watch the whole interview: https://youtu.be/ugvHCXCOmm4?si=fAy9LQvUTgB5olbJ

Read the transcript: https://lexfridman.com/dario-amodei-transcript/

CRITICAL//Consensus

11 Nov, 14:33


Onespeg

CRITICAL//Consensus

11 Nov, 14:26


Links for 2024-11-10

AI:

1. LLMs Look Increasingly Like General Reasoners: “I believe the new evidence should update all of us toward LLMs scaling straight to AGI, and therefore toward timelines being relatively short.” https://www.lesswrong.com/posts/wN4oWB4xhiiHJF9bS/llms-look-increasingly-like-general-reasoners

2. “VLMs can act as generative universal value functions. We discovered a simple method to generate zero-shot and few-shot values for 300+ robot tasks and 50+ datasets using SOTA VLMs like Gemini” https://generative-value-learning.github.io/

3. Geometry-Informed Neural Networks are evolving! Beyond faster training and improved shapes, GINNs surprised us with an emergent property – a structured latent space. https://arturs-berzins.github.io/GINN/

4. Dextrous Code Generation: Generating Robot Policy Code for High-Precision and Contact-Rich Manipulation Tasks https://dex-code-gen.github.io/dex-code-gen/

5. Groq CEO Jonathan Ross says generative AI is so powerful because it benefits from increasing compute to find solutions buried in search trees that otherwise wouldn't be found and which give AIs their intuition https://youtu.be/KhLXVRiZBdo?si=WOv5ZzclpGJlTJEr&t=575

6. Why AI Could Eat Quantum Computing’s Lunch https://www.technologyreview.com/2024/11/07/1106730/why-ai-could-eat-quantum-computings-lunch/ [no paywall: https://archive.is/R6kaY]

7. Why could a coding model trained on just 2.5T tokens compete with top-tier models like DeepSeekCoder (10T tokens) and QwenCoder (15T tokens)? https://opencoder-llm.github.io/

8. Debate May Help AI Models Converge on Truth https://www.quantamagazine.org/debate-may-help-ai-models-converge-on-truth-20241108/

9. “I don't see signs of these capability increases stopping or slowing down, and if they do continue I expect the impact on society to start accelerating as they exceed what an increasing fraction of humans can do. I think we could see serious changes in the next 2-5 years.” https://www.lesswrong.com/posts/CNA8ksMwcuXHPjXRt/personal-ai-planning

Health:

1. Three people with severely impaired vision who received stem-cell transplants have experienced substantial improvements in their sight https://www.nature.com/articles/d41586-024-03656-z

2. A ‘Crazy’ Idea for Treating Autoimmune Diseases Might Actually Work https://www.theatlantic.com/health/archive/2024/11/lupus-car-t-immune-reset-autoimmune-disease/680521/ [no paywall: https://archive.is/keEBa]

Political forecasting:

1. Polling by asking people about their neighbors: When does this work? Should people be doing more of it? And the connection to that French dude who bet on Trump https://statmodeling.stat.columbia.edu/2024/11/09/polling-by-asking-people-about-their-neighbors-when-does-this-work/

2. “…one election—or even several national elections—does not supply enough data to distinguish the performance of probabilistic forecasts that are so similar to each other.” https://statmodeling.stat.columbia.edu/2024/11/10/prediction-markets-in-2024-and-poll-aggregation-in-2008/

Miscellaneous:

1. “For decades, astrophysicists have been saying dark matter can't be mostly black holes. But this may be wrong: new calculations suggest up to 100% of dark matter could be black holes, about as heavy as asteroids, that formed in the very early universe.” https://mathstodon.xyz/@johncarlosbaez/113454218813011168

2. China’s Libertarian Medical City? https://marginalrevolution.com/marginalrevolution/2024/11/chinas-libertarian-medical-city.html

CRITICAL//Consensus

08 Nov, 13:17


😎

CRITICAL//Consensus

01 Nov, 12:50


Elon Musk
"The Neuralink implant itself could eventually cost around $1,000–$2,000 with high production.

The surgery, done by a robot, would be quick—our goal is a 10-minute, 600-second procedure.

All-inclusive, it could be priced similarly to LASIK, around $5,000."


CRITICAL//Consensus

01 Nov, 12:49


Boo

CRITICAL//Consensus

30 Oct, 12:02


ep.1, 2024

Rui Huang

CRITICAL//Consensus

28 Oct, 04:11


Cyberpunk City - Lucy's Rooftop by E-Hsuan

CRITICAL//Consensus

28 Oct, 04:09


behind the scenes of Akira (1988)

CRITICAL//Consensus

28 Oct, 03:31


It's because Japan and England are linked on a spiritual level.

CRITICAL//Consensus

28 Oct, 03:29


Links for 2024-10-24

AI:

1. Global power demand for AI data centers could grow by more than 130 GW by 2030. https://ifp.org/future-of-ai-compute/

2. New SOTA in theorem proving: InternLM2.5-StepProver and its critic set a new miniF2F record at 65.9%. No hallucination: the math proofs produced by LLM reasoning are formally verified. https://arxiv.org/abs/2410.15700

3. miniCTX, a new benchmark that tests a model's ability to prove theorems from complex, real Lean projects https://cmu-l3.github.io/minictx/

4. New follow-up work on the effects of synthetic data on model pre-training. It’s becoming increasingly clear that the model collapse issues predicted by prior works are not panning out in theory and practice. https://arxiv.org/abs/2410.16713

5. A Comparative Study on Reasoning Patterns of OpenAI's o1 Model https://arxiv.org/abs/2410.13639

6. A Theoretical Understanding of Chain-of-Thought: Coherent Reasoning and Error-Aware Demonstration https://arxiv.org/abs/2410.16540

7. Evaluating and enhancing probabilistic reasoning in language models https://research.google/blog/evaluating-and-enhancing-probabilistic-reasoning-in-language-models/

8. Multi-Agent AI and GPU-Powered Innovation in Sound-to-Text Technology https://developer.nvidia.com/blog/multi-agent-ai-and-gpu-powered-innovation-in-sound-to-text-technology/

9. "SimpleAutomation": You spend $200 on a robot, use a special joystick to control it, and do the tasks you need. After 10 minutes of examples, your tireless autonomous assistant is ready. https://github.com/1g0rrr/SimpleAutomation

10. LLMD: A Large Language Model for Interpreting Longitudinal Medical Records https://arxiv.org/abs/2410.12860

11. Microsoft CEO Satya Nadella says AI development is being optimized by OpenAI's o1 model and has entered a recursive phase https://youtu.be/kOkDTvsUuWA?si=PwvicSLriSR5nmbN&t=1865

12. Demis Hassabis says DeepMind's drug discovery spinoff Isomorphic will have drug treatments in the clinic in a couple of years tackling "six big areas of health" https://www.ft.com/content/72d2c2b1-493b-4520-ae10-41c1a7f3b7e4 [no paywall: https://archive.is/Nbx7M]

13. ChatGPT o1-preview can code Stan https://statmodeling.stat.columbia.edu/2024/10/22/chatgpt-o1-preview-can-code-stan/

14. “ChatGPT-O1 Changes Programming as a Profession. I really hated saying that.” https://www.youtube.com/watch?app=desktop&v=j0yKLumIbaM

15. Progress towards real-time generation from multimodal models https://openai.com/index/simplifying-stabilizing-and-scaling-continuous-time-consistency-models/

16. Musk and xAI pulled off a feat that usually takes four years, setting up a supercluster of 100,000 H200 GPUs in just 19 days. Nvidia's Jensen Huang called the effort "superhuman." https://www.tomshardware.com/pc-components/gpus/elon-musk-took-19-days-to-set-up-100-000-nvidia-h200-gpus-process-normally-takes-4-years

17. Generating Distinct AI Voice Performances By Prompt Engineering GPT-4o https://minimaxir.com/2024/10/speech-prompt-engineering/

18. ETH Zurich researchers showcase a method using YOLO models to bypass reCAPTCHAv2 with 100% accuracy https://arxiv.org/abs/2409.08831

Technology:

1. Science Corporation, a leader in brain-computer interface (“BCI”) technology, has announced the preliminary clinical trials results for its PRIMA retinal implant. https://science.xyz/news/primavera-trial-preliminary-results/

2. Thermodynamic modeling to optimize the shape of a beer glass to keep it cold for the longest time. https://www.arxiv.org/abs/2410.12043

3. Solar transpiration–powered lithium extraction and storage https://techxplore.com/news/2024-10-solar-powered-lithium-brine.html

Science:

1. The insane capabilities of fish in tests of memory, learning, and problem-solving. https://80000hours.org/podcast/episodes/sebastien-moro-fish-cognition-senses-social-lives/

2. A substantial reduction of Alzheimer's disease associated with semaglutide (Ozempic) in > 1 million people with Type 2 diabetes from a nationwide data resource https://alz-journals.onlinelibrary.wiley.com/doi/10.1002/alz.14313

CRITICAL//Consensus

25 Oct, 14:10


https://www.newscientist.com/article/2452876-dna-has-been-modified-to-make-it-store-data-350-times-faster/

CRITICAL//Consensus

24 Oct, 12:53


Links for 2024-10-21

AI:

1. GenRM, an iterative algorithm that trains an LLM on self-generated reasoning traces, leading to synthetic preference labels matching human preference judgments https://arxiv.org/abs/2410.12832

2. Meta's Joe Spisak explains how AI models can train themselves by generating images, asking itself questions about them, and choosing the best answers, in order to move beyond human data and human fine-tuning, and teach itself from synthetic data https://youtu.be/QS7C3ZCI8Dw?si=yt7fFMuVoQc-TAf1&t=595

3. Robots can learn skills from YouTube without complex video processing. https://arxiv.org/abs/2410.09286

4. OKAMI: Teaching Humanoid Robots Manipulation Skills through Single Video Imitation https://ut-austin-rpl.github.io/OKAMI/

5. Researchers developed DIAMOND, an AI model that can generate a playable simulation of Counter-Strike at 10 fps within a neural network. https://diamond-wm.github.io/

6. OpenAI's Noam Brown says the new o1 model beats GPT-4o at math and code, and "these numbers, I can almost guarantee you, are going to go up over the next year or two" https://www.youtube.com/live/Gr_eYXdHFis?si=bCO09axuwC6_4Cn9&t=1329

7. Solving complex problems with OpenAI o1 models https://openai.com/business/solving-complex-problems-with-openai-o1-models/

8. Now Anyone Can Code: How AI Agents Can Build Your Whole App https://www.youtube.com/watch?v=jbIQfoldLag

9. Microsoft announces new autonomous agent capabilities across Copilot Studio and Dynamics 365. Agents will be capable of asynchronous orchestration of complex tasks over long time periods, and will independently learn and improve from memory. https://blogs.microsoft.com/blog/2024/10/21/new-autonomous-agents-scale-your-team-like-never-before/

10. OpenAI CPO Kevin Weil says the o1 reasoning model is only at the GPT-2 level so it will improve very quickly as they are only at the beginning of scaling the inference-time compute paradigm, and by the time competitors catch up to o1, they will be 3 steps ahead https://youtu.be/VsmEMUiPXIs?si=bwXhBdkSqMU2rrrQ&t=1159

11. Is addition all you need? This paper replaces a lot of LLM floating point multiplications with integer additions and the models still work but are 5-20x more computationally efficient. https://arxiv.org/abs/2410.00907
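There is a classic intuition behind item 11's multiplication-free idea: an IEEE-754 bit pattern is roughly a scaled, biased logarithm of the float it encodes, so adding two bit patterns as integers approximates multiplying the floats (Mitchell's approximation). A minimal sketch of that intuition only, not the linked paper's actual algorithm; `BIAS` is the bit pattern of 1.0:

```python
import struct

def float_to_bits(x):
    # Reinterpret a float32 as its raw 32-bit pattern.
    return struct.unpack("<I", struct.pack("<f", x))[0]

def bits_to_float(b):
    # Reinterpret a 32-bit pattern as a float32.
    return struct.unpack("<f", struct.pack("<I", b & 0xFFFFFFFF))[0]

BIAS = float_to_bits(1.0)  # 0x3F800000

def approx_mul(a, b):
    # log(a*b) = log(a) + log(b); bit patterns act as crude fixed-point
    # logs, so integer addition (minus the bias) approximates multiplication.
    return bits_to_float(float_to_bits(a) + float_to_bits(b) - BIAS)

print(approx_mul(3.0, 7.0))  # 20.0 (true product: 21; error stays small)
print(approx_mul(1.0, 5.5))  # 5.5 (exact when one operand is 1.0)
```

The approximation error is bounded (worst case around 11% for Mitchell's method), which is why low-precision neural nets can tolerate this kind of substitution.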

12. US Treasury Says AI Tools Prevented $1 Billion of Fraud in 2024 https://gizmodo.com/us-treasury-says-ai-tools-prevented-1-billion-of-fraud-in-2024-2000513080

13. San Francisco techies are loving self-driving cars. Waymo is currently serving over 100,000 paid rides a week. https://www.wsj.com/tech/waymo-san-francisco-self-driving-robotaxis-uber-244feecf [no paywall: https://archive.is/Wo5wm]

Technology:

1. Why does the connectome seem to have so little relationship to neural activity in C. elegans? https://www.biorxiv.org/content/10.1101/2024.09.22.614271v1

2. Computer simulations point the way towards better solar cells https://www.chalmers.se/en/current/news/f-computer-simulations-point-the-way-towards-better-solar-cells/

3. Ultra-Deep Fracking for Limitless Geothermal Power Is Possible: EPFL https://newatlas.com/energy/fracking-key-geothermal-power/

4. It’s Time to Build the Exoplanet Telescope https://www.palladiummag.com/2024/10/18/its-time-to-build-the-exoplanet-telescope/

Psychology:

1. Does learning music improve cognitive abilities? According to this new meta-analysis, the answer is yes, during childhood and for one specific capacity: inhibitory control. https://www.sciencedirect.com/science/article/abs/pii/S0010027724001999

2. Finland's baby bust extends to historically large Laestadian families https://yle.fi/a/74-20086218

3. Eight out of 10 South Koreans aged 19 to 34 viewed South Korea as “a hell,” while 7.5 out of 10 said they hoped to leave. https://english.hani.co.kr/arti/english_edition/e_national/922522.html

Miscellaneous:

1. The “Hubble tension”: A growing crisis in cosmology https://mathscholar.org/2024/10/the-hubble-tension-a-growing-crisis-in-cosmology/

CRITICAL//Consensus

24 Oct, 12:53


Links for 2024-10-18


AI:

1. Transformers can be trained to solve a 132-year-old open problem: discovering global Lyapunov functions. https://arxiv.org/abs/2410.08304

2. DeepSeek is introducing Janus: a revolutionary autoregressive framework for multimodal AI! By decoupling visual encoding into separate pathways and unifying them with a single transformer, it outperforms previous models in both understanding and generation. https://arxiv.org/abs/2410.13848

3. nGPT: A hypersphere-based Transformer achieving 4-20x faster training and improved stability for LLMs. https://arxiv.org/abs/2410.01131

4. Google presents Inference Scaling for Long-Context Retrieval Augmented Generation — Finds that increasing inference computation leads to nearly linear gains in RAG performance when optimally allocated. https://arxiv.org/abs/2410.04343

5. A method to learn from internet-scale videos without action labels https://latentactionpretraining.github.io/

6. Combining next-token prediction and video diffusion in computer vision and robotics https://news.mit.edu/2024/combining-next-token-prediction-video-diffusion-computer-vision-robotics-1016

7. TopoLM: brain-like spatio-functional organization in a topographic language model https://arxiv.org/abs/2410.11516

8. The most compelling single-neuron evidence for the existence of a generative model in the human brain https://www.biorxiv.org/content/10.1101/2024.10.05.616828v2.abstract

9. Looking Inward: Language Models Can Learn About Themselves by Introspection https://arxiv.org/abs/2410.13787

10. Google Deepmind trained a grandmaster-level transformer chess player that achieves 2895 ELO, even on chess puzzles it has never seen before, with zero planning, by only predicting the next best move. https://arxiv.org/abs/2402.04494

11. Evidence of Learned Look-Ahead in a Chess-Playing Neural Network? https://www.lesswrong.com/posts/hPGw7hWYbYyvDcqYK/evidence-against-learned-search-in-a-chess-playing-neural

12. The Premise of a New S-Curve in AI https://tomtunguz.com/elo-improvement/

13. Sam Altman says the most important piece of knowledge discovered in his lifetime was that scaling AI models leads to unbelievable, predictable improvements in intelligence and he wondered if he was crazy or in a cult when he tried to explain it to others and they didn't understand https://youtu.be/FVRHTWWEIz4?si=dvOzjcfwL1ZR-lVM&t=1226

14. Researchers from Google and UoW propose a new collaborative search algorithm to adapt LLM via swarm intelligence. https://arxiv.org/abs/2410.11163

15. Agent S is a new open agentic framework that enables autonomous interaction with computers through a GUI. https://arxiv.org/abs/2410.08164v1

16. Boston Dynamics and Toyota announced a partnership to accelerate the development of general-purpose humanoid robots https://pressroom.toyota.com/boston-dynamics-and-toyota-research-institute-announce-partnership-to-advance-robotics-research/

17. No Slowdown At TSMC, Thanks To The AI Gold Rush https://www.nextplatform.com/2024/10/17/no-slowdown-at-tsmc-thanks-to-the-ai-gold-rush/

18. U.S. Special Operations Command is seeking the ability to create AI-generated social media users https://theintercept.com/2024/10/17/pentagon-ai-deepfake-internet-users/

19. AI Mathematical Olympiad - Progress Prize 2 https://www.kaggle.com/competitions/ai-mathematical-olympiad-progress-prize-2

Science:

1. “The galaxies were never supposed to be so bright. They were never supposed to be so big. And yet there they are—oddly large, luminous objects that keep appearing in images taken by the James Webb Space Telescope” https://www.quantamagazine.org/the-beautiful-confusion-of-the-first-billion-years-comes-into-view-20241009/

2. Owning a video game console may improve mental health. A natural experiment: due to a shortage, Japanese retailers allocated consoles to customers by lottery https://www.nature.com/articles/s41562-024-01948-y

3. Butterfly species’ big brains adapted giving them a survival edge, study finds https://www.sciencedaily.com/releases/2023/07/230712203752.htm

CRITICAL//Consensus

24 Oct, 12:51


The Ghost in the Shell: Fully Compiled (complete hardcover collection)

CRITICAL//Consensus

24 Oct, 00:00


🔗

CRITICAL//Consensus

24 Oct, 00:00


«LAZARUS», a new original anime

Directed by Shinichiro Watanabe, with Chad Stahelski (John Wick) as action choreographer.

Produced by studio MAPPA.

13 episodes.

Premieres in 2025 on Adult Swim.

CRITICAL//Consensus

21 Oct, 23:19


I WANT THE OPPOSITE OF TED K. I WANT MORE TECH I WANT TO RULE THE STARS AND THE EARTH. GREATNESS IS WITHIN OUR GRASP IF ONLY WE HAD THE WILL TO DO IT

CRITICAL//Consensus

18 Oct, 12:10


How Quantum Search Works — A Clear and Simple Demonstration

EDGE

CRITICAL//Consensus

18 Oct, 04:59


Here it is: bright, colorful neon cyberpunk!
Nights in the Chinese city of Qingdao
Cyberdelia #technology

CRITICAL//Consensus

18 Oct, 04:55


Onespeg

CRITICAL//Consensus

18 Oct, 04:49


Josan Gonzalez

CRITICAL//Consensus

18 Oct, 04:45


THEY CAN'T KEEP GETTING AWAY WITH IT

https://www.darkhorsedirect.com/products/cyberpunk-edgerunners-the-final-moment-premium-statue

CRITICAL//Consensus

18 Oct, 04:44


Demis Hassabis says it is wrong to think of AI as just another technology; he says it will be "epoch-defining" and will soon cure all diseases, solve climate and energy problems, and enrich our lives.

Original source: https://x.com/GoogleDeepMind/status/1846974292963066199