Is 100M rows in a database enough to be impressive?

I now have 8 Raspberry Pis on my desk...
I might have a problem.

Raspi cluster is handling 3k SQL inserts/second.
At this rate, importing 2 billion rows will take 80 days :(
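
(Sanity-checking that arithmetic — plain python3, nothing cluster-specific. At a flat 3k rows/second, 2 billion rows would take closer to 8 days; an 80-day estimate implies an effective rate nearer 290 rows/second, so the sustained rate is presumably well below the peak.)

ubuntu@node1:~$ python3 -c 'print(round(2_000_000_000 / 3000 / 86400, 2))'   # days at a flat 3k rows/s
7.72
ubuntu@node1:~$ python3 -c 'print(round(2_000_000_000 / (80 * 86400), 1))'   # rows/s implied by 80 days
289.4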

3000 SQL inserts per second on my raspis... not fast enough yet.
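
If the ceiling is per-statement overhead, multi-row INSERTs are usually the first big win. A minimal sketch against the CockroachDB SQL shell — the readings table, its columns, and the --insecure/--host flags here are assumptions, not the real schema:

ubuntu@node1:~$ cockroach sql --insecure --host=node1 -e "
CREATE TABLE IF NOT EXISTS readings (id INT PRIMARY KEY, val STRING);
-- one statement carrying many rows amortizes parse/plan/commit overhead
INSERT INTO readings (id, val) VALUES (1, 'a'), (2, 'b'), (3, 'c');"

Batching a few hundred rows per statement generally moves the needle far more than adding more clients.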

Hmm, have you ever inserted 2 billion rows into a database all at once?

I've got CockroachDB running on my pi cluster.
Anyone have a ~10TB database they want to play with?
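
At that scale, CockroachDB's bulk path is IMPORT INTO rather than row-by-row INSERTs: it ingests staged files directly. A sketch, assuming a hypothetical readings table and a CSV dropped into a node's extern directory (which is what nodelocal:// resolves to):

ubuntu@node1:~$ cp rows.csv cockroach-data/extern/    # stage the file where nodelocal:// can see it
ubuntu@node1:~$ cockroach sql --insecure --host=node1 -e "IMPORT INTO readings (id, val) CSV DATA ('nodelocal://1/rows.csv');"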

C++ Creator Bjarne Stroustrup Weighs in on Distributed Systems, Type Safety and Rust – The New Stack 

Anyone have a "big data" problem to play with?

Is it wrong that I drool over the Cosmo Communicator?

Aight, I think all the raspis are running BOINC now
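
One way to confirm that from a single seat — loop over the nodes and poke each client (passwordless SSH and the node1..node8 hostnames are assumptions):

ubuntu@node1:~$ for h in node{1..8}; do ssh "$h" "boinccmd --get_state >/dev/null && echo $h: boinc is up"; done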

ubuntu@node1:~$ sudo gluster volume info

Volume Name: gv0
Type: Disperse
Volume ID: a1ec6b0b-125d-407e-9342-ad614d801d67
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
ubuntu@node1:~$ df -h /gluster/
Filesystem Size Used Avail Use% Mounted on
192.168.1.3:/gv0 15T 148G 14T 2% /gluster
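
For the record, a "1 x (4 + 2)" disperse volume like gv0 is six bricks, two of which are redundancy, so any two bricks can fail without data loss. Roughly how one gets created — the brick paths are an assumption, and the peers must already be probed into the pool:

ubuntu@node1:~$ sudo gluster volume create gv0 disperse 6 redundancy 2 node{1..6}:/data/brick/gv0
ubuntu@node1:~$ sudo gluster volume start gv0
ubuntu@node1:~$ sudo mount -t glusterfs 192.168.1.3:/gv0 /gluster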
