> Under the hood, a virtual table (vtable) is created for each class, and a pointer (vptr) to the vtable is added to each instance.
Coming from C++ I assumed this was the only way, but Rust has an interesting approach where individual instances don't pay any cost, because virtual dispatch is handled by fat pointers. You carry around the vtable pointer in fat pointers (`&dyn MyTrait`) only when needed, not in every instance.
There have been type-erasure libraries in c++ for a longish time that allow choosing inline vtables and inline storage. It's definitely been a widely talked about technique for at least 10 years (I see talks about Dyno from 2017).
> only when needed
Do you know how exactly this is deduced?
It's not. The user has to decide.
A specific type/reference to a type will always use static dispatch.
fn foo(bar: &Baz) { bar.thing(); }
A dyn trait reference will always use dynamic dispatch and carry around the vtable pointer.
fn foo(bar: &dyn BazTrait) { bar.thing(); }
Ah, I see. Do I understand correctly that this means that, for a given instance of a polymorphic object, I can switch between static polymorphism and dynamic dispatch, and use both simultaneously? How is this useful in practical terms, like why would I want to do it?
Sort of. Given an instance (can even be a primitive) you can obtain a dyn reference to a trait it implements simply by casting it.
let a: i32 = 12;
let b = &a as &dyn std::string::ToString; // i32 implements the ToString trait
let c = a.to_string(); // Static dispatch
let d = b.to_string(); // Dynamic dispatch through dyn reference
Note that there aren't really any polymorphic objects in Rust. All polymorphism in this case goes through the dyn reference, which contains a pointer to a vtable for a specific trait.
Additionally, going from a dyn reference to a type-specific reference is not easy. Also, certain methods and traits are not dyn-compatible, mostly due to generic parameters.
The main use comes in with various libraries. Doing dynamic dispatch on a specific type is not very useful, but your library might expose a trait which you then call some methods on. If you accept a generic parameter (eg. impl Trait) each such invocation will cause monomorphization (the function body is compiled separately for each generic type combination). This can obviously bloat compile times.
Using a dyn reference in your API will result in only a single version being compiled. The downside is the inability to inline or optimize based on the type.
One additional use I found is that you can sometimes get around the divergent expression type in match expressions. Say you need to print out some values of different types:
let value: &dyn Display = match &foo {
    A(numeric_id) => numeric_id,
    B(string_name) => string_name,
    C => &"static str",
};
This would not work without dyn as each value has a different type.
I think the question is, do you know at compile time what the concrete type is? In situations where you do, use static. (I'm not sure I'd call that "polymorphism". If you know the static type it's just a function on a type, and who cares that other types have functions with the same name?) But if you don't know the concrete type at compile time, then you must use dynamic dispatch.
And you can use each approach with the same type at different points in the code, even for the same function. It just depends on your local knowledge of the concrete type.
+1
Good point, thanks for sharing!
This is the standard type class approach. Haskell does the same thing.
While this is a great article, I feel it buries the lede.
For me, the key insight was from the last paragraph of the article:
C++23 introduces "deducing this", which is a way to avoid the performance cost of dynamic dispatch without needing to use tricks like CRTP, by writing:
class Base {
public:
auto foo(this auto&& self) -> int { return 77 + self.bar(); }
};
class Derived : public Base {
public:
auto bar() -> int { return 88; }
};
I wish the article had gone into more detail on how this works, when you can use it, and what its limitations are.
Thanks for the feedback, I'll consider expanding in a separate post.
I wonder if I still have the link.
One of the papers I had bookmarked when toying with my own language design was by someone who had worked out how to make interface dispatch as fast as or faster than vtables by using perfect hashing and using the vtable as a hash table instead of a list.
You can also, when inlining a polymorphic call, put a conditional block in that bounces back to full dispatch if the call occasionally doesn’t match the common case. The problem with polymorphic inlining though is that it quickly resembles the exact sort of code we delete and replace with polymorphic dispatch:
if (typeof arg1 === "string") {
} else if (typeof arg1 === …) {
} else if (…) {
} else {
}
As someone whose favorite language is C, I don't see what is wrong with that code? Sure, you need to extend it with a new subtype, but you also need to implement every virtual function anyway. And if you use a switch instead of an if-else chain, the compiler will complain when you are missing a subtype.
What's wrong with it is, when I extend with a new subtype, I have to fix up the locations that use the type. Potentially all of the locations that use it - I at least have to look at all of them.
With the polymorphic approach, I just have to create the new subtype, and all the users can do the right thing (if they were written with polymorphism in mind, anyway - if they use virtual functions on the base class).
Why would I change the users at all instead of just modifying the dispatch method in the super type?
I see. Yes, you can do it that way.
Still... doing it the C++ way, I can just declare the sub type as deriving from the super type, and I don't have to fix up the super type.
That’s the OO way. Of which C++ is an instance.
Nice one, TIL
One caveat with "hash vtables" is that you only really see a performance win when the interface has a lot of specializations.
As I just mentioned in another reply, the problem they were trying to solve was hierarchies where it makes sense for a group of types to be constructed by the combination of two or three narrowly scoped interfaces.
For instance, if you treat some collections as read only, you can define comprehensions across them with a single implementation. But that means the mutators have to be contained in another type, which a subset will implement, and may have covariant inputs.
I've been thinking through what features I'd want in a language if I were designing one myself, and one of my desires is to have exhaustive matches on enums (which could be made of any primitive type) and sum types. The ability to generate perfect hashes at compile time was one of the things that falls out nicely from that
> using the vtable as a hash table instead of a list.
Could you explain this a bit more? The word "list" makes me think you might be thinking that virtual method lookup iterates over each element of the vtable, doing comparisons until it finds a match -- but I'm certain that this is not how virtual method invocation works in C++. The vtable is constructed at compile time and is already the simplest possible "perfect hashtable": a short, dense array with each virtual method mapping to a function pointer at a statically known index.
The problem they were trying to solve was multiple inheritance by nominal type, not by code reuse. So interfaces, basically.
So these guys essentially assigned a hash code to every function of every interface, and then to dispatch, instead of obj.vtable[12] you would do modular math x = signature.hash % len(obj.vtable) and call obj.vtable[x].
I believe this was sometime around 2005-2008 and they found that it was fast enough on hardware of that era to be usable.
Thanks, I think I get it now. The hash value would be a pure function of the method's signature (argument types and return type) and its name, so that two interfaces with a same-name, same-signature method would hash to the same value and thus invoke the same underlying method; the constraints would be that, after modulo, different methods must map to different indices; and the objective function to minimise would be the vtable size (which I think would be common across all classes).
But maybe I don't get it, since this would require knowledge of all interfaces, and as soon as you require that, it's straightforward to build a minimal-size mapping from method name+signature to integer index: e.g., just form the union of all method declarations appearing in any interface, sort them lexicographically, and use a method's position in this sorted list as its index. Lookups in this map are only ever done at compile time so there's no runtime inefficiency to worry about.
The problem is that C++ stores the vtable inside the object, and the objects over which you're iterating often weren't allocated contiguously. Even when they are, if each object contains lots of other data, the vtables won't necessarily be close to each other. That means that invoking virtual functions inside a loop means a lot of cache misses, and since the data you're fetching will be a branch target, it's often hard to find other useful work to accomplish during the memory delay cycles. However, in a language where you can store a relatively tight array of object IDs (or even use tag bits in the this pointer), now you have a much higher cache hit rate on the indexes to your equally tight dispatch table, which will also have a high hit rate.
It's a fair amount of extra work, but in a hot loop it's sometimes worth it. "You can often solve correctness problems (tricky corner cases) by adding an extra layer of indirection. You can solve any performance problem by removing a layer of indirection."
At the time that article was published in SIGPLAN, the dominant language was Java, which is officially statically, strongly typed.
Although really it isn't, because the JVM is strongly typed but evaluates some things at load or first invocation, so it allows some languages that run on the JVM to be a bit tricky. The first generics implementation on the JVM, called Pizza, leveraged this load-time concretization to do its thing.
But if you have a language that can resolve the type system at link time then you can do this trick. Alternatively you could switch to cuckoo hashing, and if your next module load starts causing collisions, then so be it.
"list" here does not refer to a "linked list". In more academic circles, a "list" refers to any linear container, such as a Python list. In practice, C++ vtables are effectively structs containing function pointers.
That's because that type of code is actually better performing than the dynamic dispatch.
There's absolutely nothing wrong with this code. It's just that it's not as extensible
It's a 'closed world' representation where the code assumes it knows about every possibility. This makes extension more difficult.
The code itself is extraordinarily good and performant.
Nice overview, it misses other kinds of dispatch though.
With concepts, templates, and compile-time execution, there is no need for CRTP, and in addition you can get better error messages about which methods are dispatched to.
Fair. New C++ standards are providing great tools for compile-time everything
But still CRTP is widely used in low-latency environments :)
Since std::variant was introduced I use inheritance and virtual calls much less than before. It's faster, since variant dispatch (via std::visit) is basically a switch statement with all execution paths visible to the compiler and thus inlining is possible. Inheritance and virtual calls are nowadays only necessary in places where it's not possible to statically list all alternatives (where the set of derived classes is open).
Yeah for C++17 or above, it's a nicer and more performant alternative in most cases
An expressive combination is Static Polymorphism + Multiple Dispatch, which Julia resorts to when it can.
Crazy web design, by the way. Diggin' it very much.
:)
Good article, rare to see simple explanations of intricate C++ ideas.
Thank you :)