Well, the site is titled "Stevey's Drunken Blog Rants".
Didn't notice that. I find the ad hominems against Larry Wall somewhat disturbing. Yes, the guy is an autocrat when it comes to Perl, and yes, he has strong delusions of grandeur. But I'd have toned it down. You can point out those effects on the final product without the value judgment.
Apart from a different choice in operators, I honestly don't see a good way to make pointers any simpler or clearer than they are in C...
Agreed, and I'm not suggesting we try. The pointer data type is crucial to C programming, and it's not a difficult concept. There are two typical "gotchas" and you've identified one of them: pointers to function types. The other is related, and that's whether array or pointer operators take precedence, a.k.a. how to properly declare and handle
char *argv[].
Contexts were what finally pushed me to lump Perl in with the esoteric languages. How can anyone treat such an inherently obfuscated language as anything but a joke?
Perl evangelists tell you it's because you aren't smart enough to understand it or open-minded enough to realize how beautiful it is. One can write comprehensible Perl. But the evangelists will tell you you're not using the language to its full potential.
There's often a tradeoff with abstraction. For instance, vector math can suffer from treating vectors as discrete entities when you aren't using SIMD to operate on the components in parallel and can operate on each component individually.
Very astute. There's the perennial question of whether a vector should be
struct vec3 {
double x, y, z;
};
or
struct vec3 {
double coords[3];
};
In the latter case, affine arithmetic can be implemented as a loop. Optimizing compilers for vector CPU architectures know how to vectorize inner loops, so they turn looped arithmetic into vector instructions. This is how we did it on the old IBM 3090 and on the Cray. For E2, E3, or P3 points the speedup isn't especially noticeable, since the setup required to start the vector instructions eats up most of the gain from vectorization. But if you're transforming a NURBS surface with 200 control points and a 190-element knot vector, the speedup is enormous.
Working with vectors means tracking intermediate values for all components, which can mean extra work moving data around, while working on one set of components at a time can allow the same calculation to be done with fewer registers and no references to main memory.
Well, we got lazy on the Cray, which stores intermediates in cache registers where they can be fetched into CPU registers in 1 cycle.
However, operator overloading is fundamentally just a different way to say which functions you're calling. I have a hard time seeing well-written operator overloading accounting for a 400% increase in execution time over equivalent C code.
Yeah, I'd still like to see that code, because I feel confident that either I or someone on my team could have made it run faster. In the straightforward C++ programming mindset you're probably going to create lots of temporaries. That's C++'s dirty little secret. But there is some syntactic handwaving you can do (e.g., making sure all your RHS values are received as references) so that for most arithmetic you're dealing, under the hood, with references to existing objects. Back when we were doing this for the aforementioned spline surfaces as first-class objects, we had some whiz programmers who knew how to do this.
Representing design geometry as C++ objects generates a lot of very fun debate: lots of size/speed tradeoff discussions, as well as arguments over where to generalize the arithmetic for maintainability. If the dimensionality of a point or vector is a stored parameter, then linear arithmetic can be implemented in a common base class as loops bounded by the dimensionality. But how do you store it effectively, when your model is of a Boeing 777 airframe containing approximately 67 gazillion points?