Sunday, January 30, 2011

Moving to Intel Threading Building Blocks

Deeper and deeper into concurrency and multithreading...

I have decided to use Intel Threading Building Blocks instead of the experimental boost::thread_pool. All computations in the Glow engine are organized into tasks and, in theory, should scale well as the number of hardware threads (CPU cores) increases.

Previously, I used boost::thread_pool for task execution. Now that I have returned to concurrency, I realize that boost::thread_pool does not scale, because it relies heavily on (spin-based) mutexes. Later I will run tests to measure the scalability of both approaches and post the results here.
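Just to illustrate the idea, here is a minimal sketch of how per-frame work could be spawned with tbb::task_group; the updatePhysics/updateAudio/updateRender functions are hypothetical placeholders, not the real Glow engine tasks.

#include <tbb/task_group.h>

// Hypothetical per-frame jobs -- placeholders, not the actual Glow engine tasks.
void updatePhysics() { /* ... */ }
void updateAudio()   { /* ... */ }
void updateRender()  { /* ... */ }

void runFrameTasks()
{
    tbb::task_group tasks;

    // Spawn each job onto TBB's work-stealing scheduler instead of
    // pushing it into a mutex-protected thread pool queue.
    tasks.run(updatePhysics);
    tasks.run(updateAudio);
    tasks.run(updateRender);

    tasks.wait(); // block until all spawned tasks have finished
}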

Wednesday, January 26, 2011

I am thinking about a concurrent scene graph.

There are several modules in the engine that need to query the scene graph, such as Physics, Audio, and Render. Moreover, in the future, several threads (tasks) inside a module will need concurrent access to the scene graph.

There are several options for how to do it:

1) Copies of the scene graph in every module (the approach also used in the OpenSceneGraph engine). It needs more memory, but each copy holds only the object types its module cares about: the Physics module holds only physics objects, and so on. Still, it does not solve the problem of concurrent access in the future.

2) Concurrent containers in the scene graph. Maybe the best solution, but currently I cannot find the best fit for it. It also looks hardware-dependent and not robust enough. The Intel Threading Building Blocks library provides concurrent containers, but it is not perfect and does not support Android and iOS.

3) Mutexes to isolate queries. Using a mutex is not a good thing for scalable multithreading, but for some usage patterns it may be fine. In my case it is a SINGLE PRODUCER, MULTIPLE CONSUMERS model (see the sketch after this list).
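A minimal sketch of option 3, assuming a reader-writer lock from Boost (boost::shared_mutex); the SceneNode struct and member names here are hypothetical placeholders, not the real Glow scene graph.

#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>
#include <string>
#include <vector>

// Hypothetical node type; the real scene graph structure is not shown here.
struct SceneNode
{
    std::string name;
    // ... transform, bounds, etc.
};

// Single producer / multiple consumers: the producer takes a unique (write)
// lock, while consumers (Physics, Audio, Render queries) take shared (read) locks.
class SceneGraph
{
public:
    void addNode(const SceneNode& node)   // producer
    {
        boost::unique_lock<boost::shared_mutex> lock(m_mutex);
        m_nodes.push_back(node);
    }

    std::size_t nodeCount() const         // consumer
    {
        boost::shared_lock<boost::shared_mutex> lock(m_mutex);
        return m_nodes.size();
    }

private:
    mutable boost::shared_mutex m_mutex;
    std::vector<SceneNode>      m_nodes;
};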

Can anyone suggest robust cross-platform concurrent containers (at least a hash_map or vector)?

An excellent blog about concurrency and scalability: http://www.1024cores.net

UPD: I was wrong about Intel TBB; it is a cool and powerful library. It is not yet clear to me how to use it under iOS or Android, but it is open source and ports are available for different platforms.
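For reference, a tiny example of what option 2 could look like with tbb::concurrent_hash_map; the ID-to-name table below is only an illustration, not the actual scene graph layout.

#include <tbb/concurrent_hash_map.h>
#include <string>
#include <iostream>

// Hypothetical object ID -> name lookup, for illustration only.
typedef tbb::concurrent_hash_map<int, std::string> ObjectTable;

int main()
{
    ObjectTable table;

    {
        // Writer side: an accessor holds a per-bucket lock while we insert.
        ObjectTable::accessor acc;
        table.insert(acc, 42);
        acc->second = "player";
    } // lock is released when the accessor goes out of scope

    {
        // Reader side: a const_accessor gives shared access to the element.
        ObjectTable::const_accessor acc;
        if (table.find(acc, 42))
            std::cout << acc->second << std::endl;
    }
    return 0;
}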

Monday, January 17, 2011

Memory Management in the Glow engine

Memory management is a very important thing in the console world. We can forget about it on a PC (at least for several hours, if there are no leaks), but consoles are restricted, so we should expect that after some time a memory allocation will return NULL (or throw an exception, depending on the compiler). Fragmentation is also a serious problem after allocating many small objects. So, with plans to move to mobile devices, we decided to develop a memory management system.

A new memory manager has been implemented for the Glow engine. Instead of the single common C++ heap, every engine module (physics, AI, navigation, sounds, etc.) now gets its own optimized single-threaded heap with pools for small objects.

A special heap for transferring messages between modules, memory leak debugging, and shared pointers (from the Boost C++ libraries) for memory arrays were also added.

An allocator for STL containers was implemented, so every memory allocation in a specific module comes from that module's heap. STL containers and strings are an evil force of memory fragmentation, so a process of removing every STL container from the code has started. Some vectors were replaced by a special stack vector container, based on boost::array with restricted dynamic growth. Also, most constant std::string instances were replaced with const char* equivalents (with memory allocated from a constant string pool) and "copy on create if exists" usage.
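A rough sketch of the per-module allocator idea, assuming C++11 allocator requirements; ModuleHeap and its malloc/free placeholders are hypothetical stand-ins for the real module heaps.

#include <cstddef>
#include <cstdlib>
#include <vector>

// Hypothetical module heap interface -- the real Glow heap API is not shown here.
struct ModuleHeap
{
    void* allocate(std::size_t bytes) { return std::malloc(bytes); } // placeholder
    void  deallocate(void* p)         { std::free(p); }              // placeholder
};

// Minimal STL-compatible allocator that routes all allocations to a module heap.
template <typename T>
class ModuleAllocator
{
public:
    typedef T value_type;

    explicit ModuleAllocator(ModuleHeap* heap) : m_heap(heap) {}

    template <typename U>
    ModuleAllocator(const ModuleAllocator<U>& other) : m_heap(other.heap()) {}

    T* allocate(std::size_t n)
    {
        return static_cast<T*>(m_heap->allocate(n * sizeof(T)));
    }

    void deallocate(T* p, std::size_t) { m_heap->deallocate(p); }

    ModuleHeap* heap() const { return m_heap; }

private:
    ModuleHeap* m_heap;
};

template <typename T, typename U>
bool operator==(const ModuleAllocator<T>& a, const ModuleAllocator<U>& b)
{ return a.heap() == b.heap(); }

template <typename T, typename U>
bool operator!=(const ModuleAllocator<T>& a, const ModuleAllocator<U>& b)
{ return !(a == b); }

// Usage: a vector whose storage comes from the physics module heap.
// ModuleHeap physicsHeap;
// ModuleAllocator<float> alloc(&physicsHeap);
// std::vector<float, ModuleAllocator<float> > positions(alloc);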

A memory debugger UI was added, with rendering of memory fragmentation and information about allocated blocks: count, amount, overhead.



The remaining part of the memory manager implementation is a garbage collector for geometry.

The January release date was very optimistic, so we moved the release of the alpha version to Spring 2011. The major parts are ready, but there are many small tasks left to polish the editor and engine.