mstdn.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A general-purpose Mastodon server with a 500 character limit. All languages are welcome.

Replied in thread

@Catvalente

Or just use your AI locally 🦾 💻 🧠

I completely understand the concerns about relying too heavily on AI, especially cloud-based, centralized models like ChatGPT. The issues of privacy, energy consumption, and the potential for misuse are very real and valid. However, I believe there's a middle ground that allows us to benefit from the advantages of AI without compromising our values or autonomy.

Instead of rejecting AI outright, we can opt for open-source models that run on local hardware. I've been running large language models (LLMs) on my own hardware, and this approach offers several benefits:

- Privacy - By running models locally, we can ensure that our data stays within our control and isn't sent to third-party servers.

- Transparency - Open-source models allow us to understand how the AI works, making it easier to identify and correct biases or errors.

- Customization - Local models can be tailored to our specific needs, whether it's for accessibility, learning, or creative projects.

- Energy Efficiency - Local processing can be more energy-efficient than relying on large, centralized data centers.

- Empowerment - Using AI as a tool to augment our own abilities, rather than replacing them, can help us learn and grow. It's about leveraging technology to enhance our human potential, not diminish it.

For example, I use local LLMs for tasks like proofreading, transcribing audio, and even generating image descriptions. Instead of ChatGPT and Grok, I use Jan.ai with Mistral, Llama, OpenCoder, Qwen3, R1, WhisperAI, and Piper. These tools help me be more productive and creative, but they don't replace my own thinking or decision-making.
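To make the proofreading workflow concrete: local runtimes like Jan typically expose an OpenAI-compatible HTTP API on localhost, so the request never leaves the machine. Below is a minimal sketch assuming such an endpoint; the URL, port, and model name are assumptions for illustration, not details from this post:

```python
import json
import urllib.request

# Assumed local endpoint; Jan and similar runtimes serve an
# OpenAI-compatible API on localhost (your port may differ).
LOCAL_ENDPOINT = "http://localhost:1337/v1/chat/completions"

def build_proofread_request(text: str, model: str = "mistral-7b") -> dict:
    """Build an OpenAI-style chat payload asking a local model to proofread."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are a proofreader. Return the corrected text only."},
            {"role": "user", "content": text},
        ],
        "temperature": 0.2,  # low temperature for conservative edits
    }

def proofread(text: str) -> str:
    """POST the request to the local server; no data leaves the machine."""
    payload = json.dumps(build_proofread_request(text)).encode("utf-8")
    req = urllib.request.Request(
        LOCAL_ENDPOINT, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the endpoint speaks the same protocol as the big cloud APIs, swapping a cloud model for a local one is mostly a one-line URL change.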

It's also crucial to advocate for policies and practices that ensure AI is used ethically and responsibly. This includes pushing back against government overreach and corporate misuse, as well as supporting initiatives that promote open-source and accessible technologies.

In conclusion, while it's important to be critical of AI and its potential downsides, I believe that a balanced, thoughtful approach can allow us to harness its benefits without sacrificing our values. Let's choose to be informed, engaged, and proactive in shaping the future of AI.

CC: @Catvalente @audubonballroon
@calsnoboarder @craigduncan

TechCrunch: Google quietly released an app that lets you download and run AI models locally. “Called Google AI Edge Gallery, the app is available for Android and will soon come to iOS. It allows users to find, download, and run compatible models that generate images, answer questions, write and edit code, and more. The models run offline, without needing an internet connection, tapping into […]

https://rbfirehose.com/2025/06/01/techcrunch-google-quietly-released-an-app-that-lets-you-download-and-run-ai-models-locally/

Replied in thread

@system76
I love #LLM, or as they're often called, #AI, especially when used locally. Local models are incredibly effective for enhancing daily tasks like proofreading, checking emails for spelling and grammatical errors, quickly creating image descriptions, transcribing audio to text, or even finding that one quote buried in tons of files that answers a recurring question.

However, if I wanted to be fully transparent to #bigtech, I would use Windows and Android with all the "big brotherly goodness" baked into them. That's why I hope these tools don't connect to third-party servers.

So, my question to you is: do you plan to offer a privacy-oriented, local-first/self-hosted LLM?

I'm not opposed to the general notion of using AI, and if done locally and open-source, I really think it could enhance the desktop experience. Even the terminal could use some AI integration, especially for spell-checking and syntax-checking those convoluted and long commands. I would love a self-hosted integration of some AI features. 🌟💻
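A terminal helper along these lines should show the model's suggested fix as a diff before anything runs. A standard-library sketch; the corrected string is hard-coded here where a local model's reply would go:

```python
import difflib

def show_correction(original: str, corrected: str) -> str:
    """Render a unified diff between the typed command and the
    model-suggested correction, so the user reviews before running."""
    diff = difflib.unified_diff(
        [original], [corrected],
        fromfile="typed", tofile="suggested", lineterm="")
    return "\n".join(diff)

# Where a local LLM would supply `corrected`, we hard-code one:
print(show_correction(
    "tar -xvzf backup.tgz -C /tpm",
    "tar -xvzf backup.tgz -C /tmp"))
```

Surfacing the change as a diff keeps the human in the loop, which matches the "augment, don't replace" point above.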
#OpenSource #Privacy #AI #LocalModels #SelfHosted #LinuxAI #LocalLLM #LocalAI

MakeUseOf: Anyone Can Enjoy the Benefits of a Local LLM With These 5 Apps . “Cloud-based AI chatbots like ChatGPT and Gemini are convenient, but they come with trade-offs. Running a local LLM—the tech behind the AI chatbot—puts you in control, offering offline access and stronger data privacy. And while it might sound technical, the right apps make it easy for anyone to get started.”

https://rbfirehose.com/2025/05/19/makeuseof-anyone-can-enjoy-the-benefits-of-a-local-llm-with-these-5-apps/

#AI #Apps #howto

I have a specific interest in eventually testing the usefulness of small #localAI genAI models which can run on hardware I can afford, so I'll probably be posting about it from time to time.

From the previously boosted article:

> AI models are often criticized for taking too much energy to train and operate. But lightweight LLMs, such as BitNet b1.58 2B4T, could help us run AI models locally on less powerful hardware. This could reduce our dependence on massive data centers.