MirageLSD -- LSD stands for "Live-Stream Diffusion" -- is a live-streaming video transformation model that claims "zero latency." You can turn "any video, game, or camera feed into a new digital world, in real time."

What if #AI could see the world like we do? That’s the idea behind #ComputerVision—machines interpreting visual data to navigate, detect, and decide. Our latest #ScienceGlossary entry explains how it works: http://go.tum.de/312381
I am passing along a hiring opportunity for a good friend (@jblugagne). I have had the pleasure of knowing and working with him for a long time, and warmly encourage interested people to apply. I have worked, and am still working a bit, on DeLTA myself, so I am biased, but I think it's a very nice project :) Boosts appreciated!
> We are looking for a Senior Python Developer to join our group at the University of Oxford’s Department of Engineering Science. This is a full-time position focused on advancing our open-source computer vision software for quantitative microscopy, DeLTA. There will also be opportunities to explore commercial applications and contribute to potential spin-off efforts.
> We’re looking for someone with strong Python skills and experience in software release and management. Backgrounds in computer vision, machine learning, or microscopy are a plus.
How is #AI reshaping the way we study the human past? A news article in the Communications of the ACM by Karen Emslie highlights how archaeologists are using #machinelearning and #computervision to uncover the stories of human culture—from detecting submerged WWII aircraft to revealing hidden words in carbonized scrolls.
AI is helping researchers identify ancient sites, analyze large and diverse datasets, and even virtually "unwrap" fragile artifacts. Though challenges like data incompleteness remain, the potential for discovery is growing fast. Read more: https://cacm.acm.org/news/researchers-tap-ai-to-dig-into-the-past/
@theofficialacm #Archaeology #MachineLearning #DigitalHumanities #VesuviusChallenge #RemoteSensing #computervision
Data Annotation vs Data Labelling - find the right one for you
Key takeaways:
• Understand the core difference between annotation and labeling
• Explore use cases across NLP, computer vision & more
• Learn how each process impacts model training and accuracy
Read now to make smarter data decisions:
Philips Taps AI to Manage Unwieldy, Outdated Image Library
Every company’s marketing department has thousands of photos that teams must sort through to find matches for advertising…
#NewsBeep #News #US #USA #UnitedStates #UnitedStatesOfAmerica #Artificialintelligence #AI #ArtificialIntelligence #ComputerVision #Philips #PYMNTSNews #Technology #VertexAI
https://www.newsbeep.com/us/9893/
Google's Gemini Veo3 now turns photos into 8-second videos with audio. The AI-powered feature includes built-in watermarks for transparency and authenticity.
Limited to Pro & Ultra users in select regions. Read the article to learn how it works and who can access it.
#Google #GeminiAI #AIVideo #ArtificialIntelligence #ComputerVision
OpenCV version 4.12.0 is now available! Highlights include: GIF decode and encode in imgcodecs, improved handling of PNG and Animated PNG files, animated WebP support, and especially the new HAL for RISC-V RVV 1.0 platforms.
Read more: https://opencv.org/blog/opencv-4-12-0-is-now-available/
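A minimal sketch of what trying the new GIF decoding might look like, assuming the codec is exposed through the existing imgcodecs entry points (imreadmulti / imwrite); the file names here are made up:

```python
# Sketch only: assumes OpenCV >= 4.12.0 with the new GIF codec reachable
# via the existing imgcodecs API; "demo.gif" is an illustrative file name.
import cv2

ok, frames = cv2.imreadmulti("demo.gif")  # decode all GIF frames
if not ok:
    raise RuntimeError("GIF decoding failed; check the OpenCV build/version")

print(f"decoded {len(frames)} frames, first frame shape: {frames[0].shape}")

# Dump the first frame as a still image for inspection.
cv2.imwrite("frame0.png", frames[0])
```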
#Introduction post again - had to reinstall my mastodon instance.
Hi! My name is Max.
I enjoy #coding and have been doing it as a hobby and for work for over 15 years. I have done #gamedevelopment #computervision #robotics #mobileapps for #nokia #symbian #android #ios #tizen #maemo. More recently, I make a living working with #React on the #Frontend.
On a lighter note, I enjoy various types of #rock music; I love #khruangbin #tameimpala #unknownmortalorchestra. I also love #scifi and #adventure movies.
New Brain-Like Vision AI Model Sees More Like a Human, Prioritizing Efficiency Over Scale
#AI #AIResearch #ComputerVision #NeuralNetworks #AIVision #VisionAI #Robotics
Another one of my posts, this one on AI tools as assistive technology: what's working, what isn't, and why, without the hype that too many people lean into when discussing this technology:
When Independence Meets Uncertainty: My Journey with AI-Powered Vision
A blind user's candid assessment of the promises and pitfalls of current AI accessibility tools
https://open.substack.com/pub/kaylielfox/p/when-independence-meets-uncertainty?utm_campaign=post&utm_medium=web
We have a new proposal to improve hardware acceleration support, but it would require a breaking interface change.
What do you think? Feedback wanted!
Abel & I, very similar digital characters
https://neural.it/2025/07/abel-i-very-similar-digital-characters/
Dive into #ComputerVision with #Supervision from this #oSC25 talk! This talk shows how to streamline dataset loading, annotation & video analysis while staying lightweight for #edge & #IoT devices #AI #openSUSE https://www.youtube.com/watch?v=5CjYBrwhwS8
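For a feel of the workflow the talk covers, here is a minimal sketch using the supervision package; it assumes an Ultralytics YOLO model as the detector and made-up file names, and the exact API may differ between supervision releases:

```python
# Sketch only: detector choice, weights, and file names are illustrative.
import cv2
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                     # any detector supervision can ingest
frame = cv2.imread("frame.jpg")                # made-up input image

detections = sv.Detections.from_ultralytics(model(frame)[0])
annotated = sv.BoxAnnotator().annotate(scene=frame.copy(), detections=detections)
cv2.imwrite("annotated.jpg", annotated)

# The same detections object plugs into video analysis, e.g. iterating frames:
# for f in sv.get_video_frames_generator("input.mp4"): ...
```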
Hi, I'm Yaroslav, and I want to explain how we measured timber volume with Computer Vision.
I'll answer why a man with a ruler won't do the job, and why you can't simply weigh the KamAZ truck before and after loading.
“The nature of scientific progress is that it sometimes provides powerful tools that can be wielded for good or for ill: splitting the atom and nuclear weapons being a case in point. In such cases, it’s necessary that researchers involved in developing such #technologies participate actively in the ethical and political discussions about the appropriate boundaries for their use. Computer vision is one area in which more voices need to be heard.”
…
“This study backs up with clear evidence what many have long suspected: that computer-vision research is being used mainly in surveillance-enabling #applications.”
#ArtificialIntelligence / #ComputerVision / #research / #surveillance / #tech <https://www.nature.com/articles/d41586-025-01965-5>
It's a bit magical how adding diversity to a training dataset improves the result of the model. I was analyzing one microscopy experiment, and the segmentation model, trained on the same experiment, was doing well on this particular movie but not on others. I added a few more experiments to the training set, and now the model does much better even on experiments outside of the training set.
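A minimal sketch of that "add more experiments" recipe, assuming a simple per-experiment folder layout and PyTorch; the class, folder names, and file layout are illustrative, not the actual pipeline from the post:

```python
# Sketch only: assumes each experiment folder holds images/ and masks/ subfolders
# with same-size PNG pairs; names are made up for illustration.
import glob, os
import cv2
from torch.utils.data import Dataset, ConcatDataset, DataLoader

class ExperimentDataset(Dataset):
    """Yields (image, mask) pairs from a single microscopy experiment folder."""
    def __init__(self, root):
        self.images = sorted(glob.glob(os.path.join(root, "images", "*.png")))
        self.masks = sorted(glob.glob(os.path.join(root, "masks", "*.png")))
    def __len__(self):
        return len(self.images)
    def __getitem__(self, i):
        image = cv2.imread(self.images[i], cv2.IMREAD_GRAYSCALE)
        mask = cv2.imread(self.masks[i], cv2.IMREAD_GRAYSCALE)
        return image, mask

# One dataset per experiment; concatenating several experiments and shuffling
# means every batch mixes imaging conditions, which is what pushes the
# segmentation model toward features that transfer beyond the training set.
train_set = ConcatDataset([ExperimentDataset(d) for d in ["exp_A", "exp_B", "exp_C"]])
loader = DataLoader(train_set, batch_size=8, shuffle=True)
```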
How do you tell when a sow is ready for love? Detecting it with ML
Hi! I'm Yaroslav Shmulev, a data scientist, MIPT graduate, and CTO of the R77 studio. We bring AI into large companies, and today I'll describe how we analyzed pig behavior with ML to pinpoint the ideal moment for insemination.
Part 2: Vision Transformer (ViT), or when transformers learned to see
Imagine a linguist suddenly becoming an expert on painting. That is exactly what happened in 2020, when the transformer architecture built for text processing learned to "see" images. The Vision Transformer (ViT) proved that convolutions are not required to understand pictures. Let's walk through, in plain terms, how it is built and how images are turned into predictions.
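A bare-bones PyTorch sketch of that pipeline: the image is cut into 16x16 patches, each patch becomes a token, a transformer encoder mixes the tokens, and a classification head reads the prediction off the [CLS] token. Sizes and hyperparameters here are illustrative, not the original ViT configuration:

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    """Bare-bones Vision Transformer: patches -> tokens -> encoder -> class logits."""
    def __init__(self, image_size=224, patch_size=16, dim=192, depth=4, heads=3, num_classes=10):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # "Patchify" with a strided conv: each 16x16 patch becomes one token of size dim.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                               # x: (B, 3, 224, 224)
        tokens = self.patch_embed(x)                    # (B, dim, 14, 14)
        tokens = tokens.flatten(2).transpose(1, 2)      # (B, 196, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)  # one [CLS] token per image
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        tokens = self.encoder(tokens)
        return self.head(tokens[:, 0])                  # classify from the [CLS] token

logits = TinyViT()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 10])
```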
JOB: Postdoc in Digital Humanities (Computer Vision & Performing Arts) at Université Rennes 2
Full-time, starting Oct 2025, part of ERC project STAGE.
Apply by 8 Sep 2025
#DigitalHumanities #ComputerVision #PerformingArts #Postdoc #ERC #JobOpportunity #CulturalHeritage
https://euraxess.ec.europa.eu/jobs/348852