#ApacheFlink


Atlassian introduced Lithium, an in-house #ETL platform designed for dynamic data movement.

Lithium simplifies cloud migrations, scheduled backups, and in-flight data validations with ephemeral pipelines and tenant-level isolation - ensuring efficiency, scalability & cost savings.

📢 InfoQ spoke with Niraj Mishra, Principal Engineer at Atlassian, about Lithium’s implementation and future.

🔗 Read more here: bit.ly/415RPYZ

#DataPipelines #KafkaStreams #ApacheKafka #ApacheFlink #SoftwareArchitecture

🎃The October issue of #CheckpointChronicle is now out 🌟

It covers Ververica's Fluss, #ApacheFlink 2.0, Iggy.rs, Strimzi's support for #ApacheKafka 4.0, tons of OTF material from @vanlightly, Christian Hollinger's write up of ngrok's data platform, nice detail of how SmartNews use #ApacheIceberg with Flink and #ApacheSpark, a good writeup from Sudhendu Pandey on #ApachePolaris, notes from Kir Titievsky on Kafka's Avro serialisers, and much more!

dcbl.link/cc-oct242


🗓️ Wednesday, September 18, 2024 at 9am PDT | 12 noon EDT | 6pm CEST
Come learn **#ApacheKafka** with @celeste! In this 2-hour workshop you'll learn the basic components of Kafka and how to get started with data streaming using Python. We'll also give a brief introduction to transforming your data using **#ApacheFlink**.
Read more and register ➡️ aiven.io/workshop/movie-recomm

Build a movie recommendation app with TensorFlow and pgvector
Aiven Workshop | Movie recommender, using PostgreSQL® and pgvector
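
If you want to tinker before the session, here is a minimal sketch of producing events from Python with the confluent-kafka client. The broker address and the `movie_ratings` topic are placeholders, not the workshop's actual material, which uses an Aiven for Apache Kafka service.

```python
import json
from confluent_kafka import Producer

# Placeholder broker; the workshop connects to an Aiven-hosted Kafka service instead.
producer = Producer({"bootstrap.servers": "localhost:9092"})

def delivery_report(err, msg):
    """Called once per message to confirm delivery or report a failure."""
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [{msg.partition()}] at offset {msg.offset()}")

# Hypothetical events, loosely themed on the movie-recommender exercise.
for rating in [{"title": "Alien", "stars": 5}, {"title": "Cats", "stars": 1}]:
    producer.produce(
        "movie_ratings",
        value=json.dumps(rating).encode("utf-8"),
        callback=delivery_report,
    )

producer.flush()  # block until all queued messages are acknowledged
```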

#CaseStudy - Discover how #Yelp reworked its data streaming architecture with #ApacheBeam & #ApacheFlink!

The company replaced a fragmented set of pipelines that streamed transactional data into its analytical systems, such as Amazon Redshift and an in-house data lake, with a unified and flexible solution built on Apache data streaming projects.

Dive into the details: bit.ly/3WgkTL7
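
For a feel of what a Beam-on-Flink pipeline looks like, here is a minimal, hypothetical Python sketch. This is not Yelp's code; the events and aggregation are invented placeholders, and the pipeline runs locally unless you point it at a Flink cluster.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder in-memory events standing in for a transactional stream.
events = [
    {"business_id": "b1", "amount": 12.5},
    {"business_id": "b1", "amount": 7.0},
    {"business_id": "b2", "amount": 3.2},
]

# Runs on the DirectRunner by default; the same pipeline can be submitted to a
# Flink cluster via Beam's portable FlinkRunner (e.g. --runner=FlinkRunner).
with beam.Pipeline(options=PipelineOptions()) as pipeline:
    (
        pipeline
        | "ReadEvents" >> beam.Create(events)
        | "KeyByBusiness" >> beam.Map(lambda e: (e["business_id"], e["amount"]))
        | "SumPerBusiness" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```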

Do you know SQL? Exactly!

Most databases and data processing tools support SQL for exactly that reason, and day by day we see all of them moving closer to the standard.

In this week's episode of the Cloud Commute podcast, our host @noctarius2k talks with @gunnarmorling from #Decodable about the benefits of #SQL, how #CDC (change data capture) works, and why Decodable uses #ApacheFlink as the underlying technology for its #StreamProcessing offering.

youtu.be/qrWBboOPY5U
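
To give a flavour of how CDC and Flink SQL fit together, here is a minimal PyFlink sketch that reads a MySQL changelog through the flink-cdc `mysql-cdc` connector and aggregates it with plain SQL. The connection details and table names are hypothetical, and the connector JAR (e.g. flink-sql-connector-mysql-cdc) must be on Flink's classpath.

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# Streaming Table API environment; Flink SQL statements run against it.
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Hypothetical source table backed by the flink-cdc 'mysql-cdc' connector.
t_env.execute_sql("""
    CREATE TABLE orders (
        order_id BIGINT,
        amount   DECIMAL(10, 2),
        PRIMARY KEY (order_id) NOT ENFORCED
    ) WITH (
        'connector'     = 'mysql-cdc',
        'hostname'      = 'localhost',
        'port'          = '3306',
        'username'      = 'flink',
        'password'      = 'secret',
        'database-name' = 'shop',
        'table-name'    = 'orders'
    )
""")

# Plain SQL over a changelog: the aggregate keeps updating as rows are
# inserted, updated, or deleted in the upstream database.
t_env.sql_query(
    "SELECT COUNT(*) AS order_count, SUM(amount) AS revenue FROM orders"
).execute().print()
```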

✍️Blogged: Flink SQL—Misconfiguration, Misunderstanding, and Mishaps

🫖 Pull up a comfy chair, grab a mug of tea, and settle in to read about my adventures troubleshooting some gnarly #ApacheFlink problems ranging from the simple to the ridiculous…

🔗 dcbl.link/troubleshooting-flin

👉 Topics include:

🤔 What's Running Where? (Fun with Java Versions)
🤨 What's Running Where? (Fun with JAR dependencies)
😵 What's Running Where? (Not So Much Fun with Hive MetaStore)
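
On the "what's running where" theme, one quick sanity check is comparing the bytecode version baked into your job JAR with the JVM the cluster actually runs. Here is a small, generic Python helper (not from the blog post) that reports the class-file major version of the first class inside a JAR:

```python
import sys
import zipfile

# Class-file major versions for a few common JDK releases.
MAJOR_TO_JDK = {52: "Java 8", 55: "Java 11", 61: "Java 17", 65: "Java 21"}

def class_major_version(data: bytes) -> int:
    # A .class file starts with the magic number 0xCAFEBABE, followed by the
    # minor (bytes 4-5) and major (bytes 6-7) version numbers, big-endian.
    if data[:4] != b"\xca\xfe\xba\xbe":
        raise ValueError("not a class file")
    return int.from_bytes(data[6:8], "big")

def inspect_jar(path: str) -> None:
    with zipfile.ZipFile(path) as jar:
        for name in jar.namelist():
            if name.endswith(".class") and "module-info" not in name:
                major = class_major_version(jar.read(name))
                jdk = MAJOR_TO_JDK.get(major, "unknown JDK")
                print(f"{path}: {name} -> major version {major} ({jdk})")
                return  # one class is usually enough to spot a mismatch

if __name__ == "__main__":
    for jar_path in sys.argv[1:]:
        inspect_jar(jar_path)
```

Run it as `python jar_versions.py my-flink-job.jar`; if the reported version is newer than what the cluster's JVM supports, the job will fail with an UnsupportedClassVersionError at deploy time.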