On 11/07/2012 01:50 AM, Reinhard Kotucha wrote:
> What I don't grok is why it's 20 times slower to load a file at once
> than in little chunks.
Traditional Lua's read("*a") reads a file's contents in BUFSIZ chunks, which are then concatenated onto the buffer that is being built up internally. The resulting realloc()/malloc() calls slow down the reading (a lot). The patched version we use in luatex reads files smaller than 100MB in one block, and falls back to standard Lua behaviour in other cases. The 100MB ceiling is arbitrary, and I could remove the limit. To be honest, I am not quite sure any more why I put that limit in there in the first place.

On the garbage collector: it does not do very well if you increase the memory requirements quickly. That is just the way sweeping GCs work. It would be nice if Lua did reference counting instead, but that is a lot of work and quite hard to get right; as far as I know, all attempts at implementing it have been abandoned before reaching production quality.

Best wishes, Taco
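To make the difference concrete, here is a minimal C sketch (not luatex's actual code; function names are made up for illustration) contrasting the two strategies: growing a buffer chunk by chunk, where each realloc() may have to copy everything read so far, versus sizing the buffer once and doing a single fread().

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* One-block strategy (like the luatex patch for files < 100MB):
   find the file size, allocate once, read once. */
static char *read_whole(const char *path, size_t *len) {
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);
    char *buf = malloc((size_t)size + 1);
    *len = fread(buf, 1, (size_t)size, f);
    buf[*len] = '\0';
    fclose(f);
    return buf;
}

/* Chunked strategy (roughly what stock Lua's "*a" does):
   read BUFSIZ bytes at a time and grow the result buffer.
   Each realloc() may move the buffer, copying all bytes read so far,
   which is what makes large files slow to load this way. */
static char *read_chunked(const char *path, size_t *len) {
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    char *buf = NULL;
    size_t total = 0, n;
    char chunk[BUFSIZ];
    while ((n = fread(chunk, 1, sizeof chunk, f)) > 0) {
        buf = realloc(buf, total + n + 1);
        memcpy(buf + total, chunk, n);
        total += n;
    }
    if (buf) buf[total] = '\0';
    fclose(f);
    *len = total;
    return buf;
}
```

Both return the same bytes; the difference is purely in allocation behaviour, which is why the one-block path is so much faster on large files.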