Process Substitution is hella neat.
$ diff important-data.txt <(clipo |sort -u)
(Of course #mksh has it as well ;)
@regehr @miodvallat incidentally, #POSIX requires that crash for sh, so mksh does it in the lksh binary (which uses long for arithmetic, as POSIX demands) as well.
The standard #mksh language provides 32-bit arithmetic with guaranteed two’s complement, wraparound, shr and sar both defined, negative modulus defined, etc. (getting there eventually involved much cursing at C’s signed integers).
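To see what those guarantees mean in practice: in mksh itself, $((2147483647 + 1)) is defined to be -2147483648. A hypothetical sketch for shells with 64-bit arithmetic (the wrap32 helper is mine, not part of mksh) that emulates the same 32-bit two’s-complement wraparound:

```shell
# wrap32 is a made-up helper emulating mksh's guaranteed 32-bit
# two's-complement arithmetic in a shell whose $(( )) is 64-bit:
wrap32() {
	v=$(( ($1) & 0xFFFFFFFF ))	# truncate to the low 32 bits
	[ "$v" -gt 2147483647 ] && v=$(( v - 4294967296 ))	# sign-extend
	echo "$v"
}

wrap32 $((2147483647 + 1))	# -2147483648 (INT_MAX wraps to INT_MIN)
wrap32 $((-2147483648 - 1))	# 2147483647 (and back again)
```

In mksh none of this is needed, because the wraparound is part of the language definition rather than undefined behaviour inherited from C.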
@justine @rl_dane @rubenerd @sherwoodinc you get that with PgUp in #mksh in the default -o emacs mode (even though I’m no Emacs user), fwiw (but you can rebind it to CurUp if you want).
@rl_dane nope, I’ve only ever used tar with files, floppy discs (the MirBSD boot floppy is a tape archive), and (inside the VM) an emulated HDD / (outside the VM) the backing store of that HDD for file transfer because I didn’t manage to set up network in some guests (Hurd, Plan 9) but needed to port #mksh to them.
(I even have a SCSI tape drive somewhere but no tape, I think.)
So… what option do I vote for?
rm'ed my .kshrc, now what?
@darkuncle @b0rk no -o pipefail in #!/bin/sh, you need #mksh or GNU #bash or so
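A quick sketch of what -o pipefail actually buys you (it is supported by mksh, bash and ksh93, but not by every /bin/sh, hence the guard):

```shell
# By default a pipeline's exit status is that of its LAST command,
# so a failure early in the pipe is silently swallowed:
false | true
echo $?		# 0

# With pipefail, the pipeline fails if ANY element fails:
if (set -o pipefail) 2>/dev/null; then
	set -o pipefail
	false | true
	echo $?	# 1
	set +o pipefail
fi
```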
> @rl_dane What's this doing?
[[ ${1:-} ]]
I use set -u in my bash scripts, which causes the script to fail with a warning if you ever reference an undefined variable (it's good discipline for bash/ksh, like use strict; use warnings; is for #Perl). Unfortunately, that means that referencing [[ $1 ]] (test if $1 contains any text) will fail if no parameter is passed.
So, as a way around that, I call the "Use default values" shell substitution:
[[ ${1:-} ]]
Which basically says, "Use $1 if it is defined, otherwise substitute the following text: ''."
Hit up the bash, sh or ksh manpage, and just search (/) for ":-". :)
I used to avoid the complex substitution syntax like the plague, because it's so hard to read.
But... it's efficient, as it doesn't have a fork()
penalty, and once you learn it, it's not tooo confusing.
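Putting set -u and ${1:-} together in a small self-contained sketch (the greet function is made up for illustration; it uses the POSIX test builtin instead of [[ so it also runs in plain sh):

```shell
set -u	# abort on any reference to an undefined variable

greet() {
	# ${1:-} expands to '' when $1 is unset, so set -u does not trip;
	# a bare "$1" here would kill the script when called with no args
	if [ -n "${1:-}" ]; then
		echo "hello, $1"
	else
		echo "hello, world"
	fi
}

greet		# hello, world
greet mksh	# hello, mksh
```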
I love elegant little things in shell:
function inst() { #is INSTalled?
type "$@" &>/dev/null
}
function runif() {
[[ ${1:-} ]] || return 1
inst "$1" && "$1"
}
...
elif $screen; then
clear
runif pfetch || runif ufetch
...
I've had several iterations of this function through the years, which I use to append something to the $PATH variable.
This is about as efficient as I can get it, I think, under the constraint of manipulating $PATH using only shell builtins (no forks):
pushpath is a function
pushpath ()
{
[[ -n $1 ]] || return 1;
PATH=":$PATH:";
[[ $PATH == *:$1:* ]] || PATH+=":$1";
PATH="${PATH##:}";
PATH="${PATH%%:}";
PATH="${PATH//::/:}"
}
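For shells without [[ or +=, the same idea fits into strict POSIX sh; a hypothetical equivalent that sidesteps the normalisation passes by matching with case instead:

```shell
# POSIX-sh sketch of the same dedup-and-append idea (not the
# original function above): wrap $PATH in colons only inside the
# case pattern, so no cleanup passes are needed afterwards.
pushpath() {
	[ -n "${1:-}" ] || return 1
	case ":$PATH:" in
	*:"$1":*) ;;			# already present: leave $PATH alone
	*) PATH="$PATH:$1" ;;		# append
	esac
}

PATH=/usr/bin:/bin
pushpath /usr/local/bin
echo "$PATH"	# /usr/bin:/bin:/usr/local/bin
pushpath /bin	# duplicate: $PATH is unchanged
echo "$PATH"	# /usr/bin:/bin:/usr/local/bin
```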
While we could add a bunch of CHECK_PORTABILITY_SKIP entries to skip these files, I prefer to fix things properly, and so have committed a general fix for this which should speed up most packages.
https://github.com/TritonDataCenter/pkgsrc/commit/df1af49c2953c235c8af501a53f13cdfa5bbf91c
Nice reduction in runtime from 86 seconds down to just 1!
If anyone has access to really old systems and is able to tell me whether they lack "read -n", that'd be hugely appreciated, though it's likely we'd bootstrap #mksh on them anyway.
Interesting performance and behaviour difference between shells and the "read" builtin.
After upgrading to macOS Sonoma I noticed the #pkgsrc check-portability script occasionally taking over an hour and falling foul of my "ulimit -t 3600".
$ wc text-public.js.map
0 960590 13488786 text-public.js.map
Time to "read f < text-public.js.map" and wc $f:
#bash: 0.5 seconds
1 960590 13060347
#mksh: 6.2 seconds
15904 973513 13030108
#dash: 6.4 seconds
15903 973410 13033181
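The underlying cost is easy to observe yourself; a hypothetical reproduction (file name made up) that any POSIX shell can run:

```shell
# Build a file whose first "line" is very long and has no trailing
# newline, roughly the shape of a minified .js.map:
printf '%01000d' 0 > big-line.tmp

# POSIX read must not consume past the first newline, which forces
# many shells to read one byte at a time -- hence the timing gap:
read -r f < big-line.tmp || :	# read returns nonzero without a newline
echo "${#f}"			# 1000
rm -f big-line.tmp
```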
I now have a GHA CI build for #mksh on Debian 2.1 (“slink”).
And on sid, also with clang or gcc-snapshot, and tons of modern warning flags. But I’m only ⅔ as proud of that as of slink, having been my first Debian release.
@lanodan @cliffle yeah definitely, mirtoconf (“mksh/Build.sh”) by design outputs all of stderr so the build log captures them.
In Debian, I sed(1) around a bit so the build log checker doesn’t pick up warnings/errors from failed configure tests, but they’re still easily recognisable:
# Debian #492377, #572252
buildmeat() {
rm -f buildmeat.tmp
(set -x; env "$@"; echo $? >buildmeat.tmp) 2>&1 | sed --posix \
-e 's!conftest.c:\([0-9]*\(:[0-9]*\)*\): error:!cE(\1) -!g' \
-e 's!conftest.c:\([0-9]*\(:[0-9]*\)*\): warning:!cW(\1) -!g' \
-e 's!conftest.c:\([0-9]*\(:[0-9]*\)*\): note:!cN(\1) -!g'
test -s buildmeat.tmp || return 255
return $(cat buildmeat.tmp)
}
It is then called as buildmeat sh Build.sh …. mirtoconf only uses conftest.c, which is not used elsewhere in #mksh (or the other projects now using it, namely #pax, #sleep and, in the future, #jupp), so this works.
@ramsey @taylan @meuon @thepracticaldev that’s not true, subshells (whether from pipes or not) inherit the current execution environment, no need to export. (Easy proof: $(…) substitutions are also subshells.)
It’s just that subshells cannot modify the parent’s environment, but that is also generally true in Unix.
Note I’m not promoting GNU bash but rather #mksh but thanks to POSIX the overlap is huge.
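A minimal demonstration of both halves of that claim:

```shell
var="not exported"

# Subshells inherit the whole execution environment, export or not:
echo "$(echo "$var")"		# not exported

# ...but writes inside a subshell never reach the parent:
( var="changed in child" )
echo "$var"			# not exported
```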
@wwwgem nroff user, mostly, not GNU gnroff.
I prefer nroff
with -mdoc
when the primary end result does not need pictures or where plaintext availability has high value (e.g. manpages, but also other systems documentation). This is very nice to write, much nicer than Tₑχ/LᴬTᴇΧ or -man, and it gives you semantic markup.
Then GNU groff can be used to provide an additional PDF which is at least somewhat legible, and I do my own HTML conversion from the plaintext output, but mdocml (now called mandoc
) can be used to create somewhat good HTML if you stick to -mdoc
commands instead of using nroff primitives. (Which I tend to not do.)
I very much dislike how the GNU g{,n}roff macropackages have changed with the last release. The MirBSD nroff macropackages, specifically -mdoc
, work well with GNU g{,n}roff and mostly avoid the pain. (Writing \-
and a font hack were still needed.)
nroff with -man
is just ugly and awful, stay away from it.
nroff with -ms
(+), -me
(ref), etc. is also possible, but I found -mdoc
more modern and therefore less buggy.
I haven’t yet used neqn
(doc, guide) or pic
and only a little tbl
(doc) (mostly, the native -mdoc
tables suffice); AT&T nroff does not have a working pic
and it doesn’t transfer to plaintext output well anyway.
I use Tₑχ/LᴬTᴇΧ when the end result primarily must be a PDF with pretty pictures (such as the installation manual for a piece of software at $dayjob that we handed to paying customers) or for more programmability. Though copy/paste from those PDFs is so bonkers I patched lstlisting
to also dump the listings to a .lst
file we provide along, so the admin can copy/paste the commands, examples and config files from there.
(I’ve never used $…$
math mode. I’m not in academics ☻)
I’ve pimped both (I have my own Tₑχ/LᴬTᴇΧ styles/class and packages, and I’ve tweaked my groff fonts (example) and bugfixed the raw roff that implements the macros). I use both, depending on the context.
For my Mu͒seScore workshop, I even have a link list (source) written in a roff-like format that I convert to both Tₑχ/LᴬTᴇΧ (for PDF output) and HTML (for the website) using a Korn shell script, so much I like the format and structure.
Otherwise I’m somewhat a fan of plaintext (with UTF-8 line drawing, etc.) but not rST or md, and a large fan of just handwriting XHTML/1.1 snippets that can be included in webpages. (This ofc doesn’t transfer well to PDF.)