Welcome to This Week in D! Each week, we'll summarize what's been going on in the D community and write brief advice columns to help you get the most out of the D Programming Language.
The D Programming Language is a general purpose programming language that offers modern convenience, modeling power, and native efficiency with a familiar C-style syntax.
This Week in D has an RSS feed.
This Week in D is edited by Adam D. Ruppe. Contact me with any questions, comments, or contributions.
Stefan Koch has been hard at work on a new CTFE implementation in the compiler over the last couple of months, inspired by DConf. I asked him to write a little on how his work is progressing. From his discussions of the implementation on the newsgroup and IRC, he has already achieved an enormous speedup and has brought memory usage down to what the CTFE function *should* use, with only a small, constant overhead instead of the enormous, many-times-over leaks of the current implementation.
The following is in his own words:
I started the implementation about a month ago. Before I started, I was still agonizing over the design.
My criteria were the following:
The last two points specifically are already huge successes, as I could implement a pseudo-JIT with very little fussing, which in turn allowed me to make a very good guess about the performance implications of my design.
So far it looks like there will be a 4.5x penalty compared to the same code executed natively. (At least when compiled with dmd... ldc will replace the jitted code with a constant.)
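As a rough illustration of the kind of comparison involved (not Stefan's code; the function and numbers below are made up): in D, the same function can be forced through the CTFE engine at compile time or compiled and run natively, and the 4.5x figure compares those two paths.

// Made-up microbenchmark shape: one function, evaluated both ways.
uint sumUpTo(uint n)
{
    uint total = 0;
    foreach (i; 0 .. n)
        total += i;
    return total;
}

enum ctResult = sumUpTo(100_000); // forced through the CTFE engine while compiling

void main()
{
    import std.stdio : writeln;
    auto rtResult = sumUpTo(100_000); // native code; ldc may fold this to a constant
    writeln(ctResult, " ", rtResult);
}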
I am currently working on supporting function calls in a lazy, multithreadable manner. By the way, the whole CTFE engine is built to be multithreadable from the start.
Below is a verbatim dump of the latest bytecode-generator API.
BCTemporary genTemporary(BCType bct);
void incSp(BCValue val);
StackAddr currSp();
BCLabel genLabel();
BCAddr beginJmp();
void endJmp(BCAddr atIp, BCLabel target);
BCAddr genJump(BCLabel target);
CndJmpBegin beginCndJmp(BCValue cond = BCValue.init, bool ifTrue = false);
void endCndJmp(CndJmpBegin jmp, BCLabel target);
void emitFlg(BCValue lhs);
void Set(BCValue lhs, BCValue rhs);
void Lt3(BCValue result, BCValue lhs, BCValue rhs);
void Gt3(BCValue result, BCValue lhs, BCValue rhs);
void Eq3(BCValue result, BCValue lhs, BCValue rhs);
void Add3(BCValue result, BCValue lhs, BCValue rhs);
void Sub3(BCValue result, BCValue lhs, BCValue rhs);
void Mul3(BCValue result, BCValue lhs, BCValue rhs);
void Div3(BCValue result, BCValue lhs, BCValue rhs);
void And3(BCValue result, BCValue lhs, BCValue rhs);
void Or3(BCValue result, BCValue lhs, BCValue rhs);
void Xor3(BCValue result, BCValue lhs, BCValue rhs);
void Lsh3(BCValue result, BCValue lhs, BCValue rhs);
void Rsh3(BCValue result, BCValue lhs, BCValue rhs);
void Mod3(BCValue result, BCValue lhs, BCValue rhs);
void Call(BCValue result, BCValue fn, BCValue[] args);
void Load32(StackAddr toAddr, BCValue from);
void Not(BCValue val);
void Ret(BCValue val);
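As a hypothetical usage sketch (not taken from the real implementation), an AST visitor might drive this API roughly as follows to emit something like result = a < b ? a : b. The names gen, a, b, and intType, and the exact jump semantics, are assumptions for illustration only.

// Hypothetical sketch only; constructors and branch semantics are assumed.
auto result = gen.genTemporary(intType);

gen.Lt3(result, a, b);                  // result = (a < b)
auto cndJmp = gen.beginCndJmp(result);  // branch on the comparison

gen.Set(result, a);                     // path where a < b
auto skipElse = gen.beginJmp();

gen.endCndJmp(cndJmp, gen.genLabel());  // the other path starts here
gen.Set(result, b);

gen.endJmp(skipElse, gen.genLabel());
gen.Ret(result);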
As I said before, it is designed to be very malleable. The bytecode generator is passed to the AST visitor as a template argument, therefore allowing inlining of the bytecode-generation calls, which effectively makes it a zero-cost abstraction.
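A minimal sketch of that design, with invented names: because the generator type is a compile-time template parameter rather than a runtime interface, the compiler knows the concrete type of every emit call and can inline it.

// Sketch with made-up names; the real visitor covers the whole AST.
struct CtfeVisitor(BCGen)
{
    BCGen gen;

    void visitAdd(BCValue result, BCValue lhs, BCValue rhs)
    {
        // The concrete BCGen is known at compile time, so this call
        // can be inlined: the abstraction costs nothing at run time.
        gen.Add3(result, lhs, rhs);
    }
}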
Currently I still place value on the ability to run the bytecode interpreter itself at compile time. This allows for nice compile-time compilers :)
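A sketch of what that means in practice (the toy interpreter and opcodes below are invented for illustration): since the interpreter is ordinary D code, it can itself be evaluated under CTFE, so bytecode can be both generated and executed while the program compiles.

// Invented toy interpreter; real opcode dispatch would be far richer.
int interpret(const(int)[] bytecode)
{
    int acc = 0;
    foreach (op; bytecode)
        acc += op; // stand-in for actual instruction handling
    return acc;
}

enum compileTimeResult = interpret([1, 2, 3]); // the interpreter itself run via CTFE

void main()
{
    auto runTimeResult = interpret([1, 2, 3]); // the same code run natively
    assert(compileTimeResult == runTimeResult);
}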
The way it looks right now, this will probably become something like LLVM, D style :)
P.S. Please excuse the hackish nature of this article.
See more at the announce forum.
To learn more about D and what's happening in D: