
mapdb_i.h

// AUTOMATICALLY GENERATED -- DO NOT EDIT!         -*- c++ -*-

#ifndef mapdb_i_h
#define mapdb_i_h

#include "config.h"

/* The mapping database.
 *
 * This implementation encodes mapping trees in very compact arrays of
 * fixed sizes, prefixed by a tree header (Mapping_tree).  Array
 * sizes can vary from 4 mappings to 4<<15 mappings.  For each size,
 * we set up a slab allocator.  To grow or shrink the size of an
 * array, we have to allocate a larger or smaller tree from the
 * corresponding allocator and then copy the array elements.
 *
 * The array elements (Mapping) contain a tree depth field.  This
 * depth and the relative position in the array are all the information
 * we need to derive the tree structure.  Here is an example:
 *
 * array
 * element   depth
 * number    value    comment
 * --------------------------
 * 0         0        Sigma0 mapping
 * 1         1        child of element #0 (depth 0)
 * 2         2        child of element #1 (depth 1)
 * 3         2        child of element #1 (depth 1)
 * 4         3        child of element #3 (depth 2)
 * 5         2        child of element #1 (depth 1)
 * 6         3        child of element #5 (depth 2)
 * 7         1        child of element #0 (depth 0)
 *
 * This array is a pre-order encoding of the following tree:
 *
 *                   0
 *                /     \
 *               1       7
 *            /  |  \
 *           2   3   5
 *               |   |
 *               4   6
 *
 * (A short sketch of how to recover this structure from the depth
 * values follows this comment block.)
 *
 * IDEAS for enhancing this implementation:
 *
 * We often have to find a tree header corresponding to a mapping.
 * Currently, we do this by iterating backwards over the array
 * containing the mappings until we find the Sigma0 mapping, from
 * whose address we can compute the address of the tree header.  If
 * this becomes a problem, we could add one more byte to the mappings
 * with a hint (negative array offset) where to find the Sigma0
 * mapping.  (If the hint value overflows, just iterate using the hint
 * value of the mapping we find with the first hint value.)  Another
 * idea (from Adam) would be to look up the tree header using the
 * physical address from the page-table lookup, but we would need to
 * change the interface of the mapping database for that (pass in the
 * physical address at all times), or we would have to include the
 * physical address (or just the address of the tree header) in the
 * Mapdb-user-visible Mapping (which could be different from the
 * internal tree representation).  (XXX: Implementing one of these
 * ideas is probably worth doing!)
 *
 * Instead of copying whole trees around when they grow or shrink a
 * lot, or copying parts of trees when inserting an element, we could
 * give up the array representation and add a "next" pointer to the
 * elements -- that is, keep the tree of mappings in a
 * pre-order-encoded singly-linked list (credits: Christian Szmajda
 * and Adam Wiggins).  24 bits would probably be enough to encode that
 * pointer.  Disadvantages: Mapping entries would be larger, and the
 * cache-friendly spatial locality of tree entries would be lost.
 *
 * The current handling of superpages sucks rocks both in this module
 * and in our user, map_util.cpp.  We could support multiple page sizes
 * by using a physframe[] array only for the largest page size.
 * (Entries of that array point to the top-level mappings -- Sigma0
 * mappings.)  Mapping-tree entries would then be either regular
 * mappings or pointers to an array of mappings of the next-smaller
 * size (credits: Christian Szmajda).
 * One problem with this approach is that lookups become more expensive,
 * as a task's mapping can potentially be in many subtrees.
 *
 *        physframe[]
 *        -------------------------------
 *        | | | | | | | | | | | | | | | | array of ptr to 4M Mapping_tree's
 *        ---|---------------------------
 *           |
 *           v a Mapping_tree
 *           ---------------
 *           |             | tree header
 *           |-------------|
 *           |             | Mapping *or* ptr to array of ptr to 4K trees
 *           |             | e.g.
 *           |      ----------------|
 *           |             |        v array of ptr to 4K Mapping_tree's
 *           ---------------        -------------------------------
 *                                  | | | | | | | | | | | | | | | |
 *                                  ---|---------------------------
 *                                     |
 *                                     v a Mapping_tree
 *                                     ---------------
 *                                     |             | tree header
 *                                     |-------------|
 *                                     |             | Mapping
 *                                     |             |
 *                                     |             |
 *                                     |             |
 *                                     ---------------
 */
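
// Illustrative sketch (not part of the generated interface): how the tree
// structure can be recovered from the pre-order depth encoding described
// above.  The helpers below are hypothetical and operate on a bare array of
// depth values like the example table; a real walker must additionally skip
// empty slots and stop at the end tag (see Mapping::next() further down).
// <cassert> is included just below this block.
#if 0
// Direct children of element `i` are the following entries whose depth is
// exactly depth[i] + 1, up to the first entry whose depth is <= depth[i].
static unsigned
count_children (unsigned char const *depth, unsigned n, unsigned i)
{
  unsigned count = 0;
  for (unsigned j = i + 1; j < n && depth[j] > depth[i]; j++)
    if (depth[j] == depth[i] + 1)
      count++;                  // element #j is a direct child of element #i
  return count;
}

// The parent of element `i` is the closest preceding entry with a smaller
// depth (undefined for the root, element #0).
static unsigned
parent_of (unsigned char const *depth, unsigned i)
{
  unsigned j = i;
  while (j > 0 && depth[j - 1] >= depth[i])
    j--;
  return j - 1;
}

static void
depth_encoding_example ()
{
  // Depth values from the example table above.
  unsigned char const depth[] = { 0, 1, 2, 2, 3, 2, 3, 1 };
  assert (count_children (depth, 8, 0) == 2);  // children: #1, #7
  assert (count_children (depth, 8, 1) == 3);  // children: #2, #3, #5
  assert (parent_of (depth, 4) == 3);
  assert (parent_of (depth, 7) == 0);
}
#endif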

#include <cassert>
#include <cstring>

#include <auto_ptr.h>

#include "config.h"
#include "globals.h"
#include "helping_lock.h"
#include "mapped_alloc.h"
#include "kmem_slab.h"
#include "std_macros.h"

//
// Mapping_tree
//
struct Mapping_tree
{
  //friend class Mapdb;
  // DATA
  unsigned _count: 16;          // Number of used entries
  unsigned _size_id: 4;         // Size class; the tree holds
                                //   Size_factor << _size_id entries
  unsigned _empty_count: 11;    // Number of empty (unused) entries
                                //   XXX currently never read, except in
                                //   sanity checks

  unsigned _unused: 1;          // (make this 32 bits to avoid a compiler bug)

  Mapping _mappings[0] __attribute__((packed));

public:
  inline void* operator new (size_t, unsigned size_factor);

  inline void operator delete (void* block, size_t);

  //inline NEEDS[Mapping_depth, Mapping_tree::last]
  Mapping_tree(unsigned size_factor, unsigned page_number);

  //inline NEEDS[Mapping_depth, Mapping_tree::last]
  Mapping_tree(unsigned size_factor, Mapping_tree* from_tree);

  // public routines with inline implementations
  inline unsigned number_of_entries() const;

  inline Mapping * mappings();

  inline Mapping * end();

  inline Mapping * last();

  // This function copies the elements of mapping tree src to mapping
  // tree dst, ignoring empty elements (that is, compressing the
  // source tree).  In-place compression is supported.
  static void copy_compact_tree(Mapping_tree *dst, Mapping_tree *src);

  inline void check_integrity();
};
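
// Hedged sketch (hypothetical helper, not part of this header): growing a
// full tree by one size class, as described in the comment at the top of
// this file -- allocate a larger tree from the corresponding slab, copy the
// entries over, and release the old array.  This assumes the
// (size_factor, from_tree) constructor declared above initializes the new
// tree from from_tree; locking and the real error handling live elsewhere.
#if 0
static Mapping_tree *
grow_tree (Mapping_tree *t)
{
  if (t->_size_id >= Size_id_max)       // Size_id_max: see enum below
    return 0;                           // already at the largest size class

  unsigned const new_size = t->_size_id + 1;

  // operator new (size_t, unsigned) picks the slab allocator for the
  // requested size class.
  Mapping_tree *bigger = new (new_size) Mapping_tree (new_size, t);
  if (! bigger)                         // slab exhausted (assumes the new
    return 0;                           // expression checks for 0)

  delete t;                             // return the old array to its slab
  return bigger;
}
#endif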

enum Mapping_depth
{
  Depth_root = 0, Depth_max = 252,
  Depth_subtree = 253, Depth_empty = 254, Depth_end = 255
};

// Helpers

//
// Mapping-tree allocators
//

enum Mapping_tree_size
{
  Size_factor = 4,
  Size_id_max = 8               // can be up to 15 (4 bits)
};
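
// Worked example (illustrative only): a tree with size id s holds
// Size_factor << s mappings, i.e. 4, 8, 16, ..., 1024 for the size ids
// currently in use (0..Size_id_max), and up to 4<<15 if the full 4-bit
// _size_id range were used (cf. the comment at the top of this file).
#if 0
static unsigned
entries_for_size_id (unsigned size_id)
{
  return Size_factor << size_id;        // 0 -> 4, 1 -> 8, ..., 8 -> 1024
}
#endif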

class mapping_tree_allocators
{
  auto_ptr<Kmem_slab> _allocator [Size_id_max + 1];

  friend class foo;             // Avoid warning about not being constructible

public:
  inline Kmem_slab * allocator_for_treesize(int size);

  static inline mapping_tree_allocators& instance();

private:
  mapping_tree_allocators();
};

//
// Physframe
//

class Physframe
{
  friend class Mapdb;
  friend class Jdb_mapdb;

  // DATA
  auto_ptr<Mapping_tree> tree;
  //Mapping_tree* tree;
  Helping_lock lock;

  // CONSTRUCTORS
  Physframe ()
    // Optimization: do this using memset in operator new []
    //: tree (0)
  {}

  ~Physframe()
  {
    assert (! lock.test());

#if 0
    if (tree)
      {
        // Find next-level trees.
        for (Mapping* m = tree->mappings();
             m;
             m = m->next (tree->end()))
          {
            if (m->subtree())
              delete m->subtree();
          }
      }
#endif
  }

  void* operator new [] (size_t size)
    {
      size = (size + Config::PAGE_SIZE - 1) >> Config::PAGE_SHIFT;
      void* block = Mapped_allocator::allocator()->unaligned_alloc (size);
      if (block)
        // Zero the whole block; `size` is in pages here (see constructor)
        memset(block, 0, size << Config::PAGE_SHIFT);
      return block;
    }

  void operator delete [] (void* block, size_t size)
    {
      if (! block)
        return;
      size = (size + Config::PAGE_SIZE - 1) >> Config::PAGE_SHIFT;
      Mapped_allocator::allocator()->unaligned_free (size, block);
    }
}; // class Physframe
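
// Hedged usage sketch (hypothetical names, fragment only): Mapdb presumably
// keeps one Physframe per page frame it manages.  Since the constructor is
// private, only the friends declared above (Mapdb, Jdb_mapdb) can allocate
// the array; operator new [] rounds the request up to whole pages and
// zero-fills the block.
#if 0
  Physframe *physframe = new Physframe[mem_size >> Config::PAGE_SHIFT];
  // ... use physframe[page_number].tree / physframe[page_number].lock ...
  delete [] physframe;
#endif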

//
// IMPLEMENTATION of inline functions follows
//

inline Mapping::Mapping()
{}

// True for empty slots and the end tag (depth Depth_empty or Depth_end).
inline bool
Mapping::unused()
{
  return data()->depth > Depth_subtree;
}

inline bool
Mapping::is_end_tag()
{
  return data()->depth == Depth_end;
}

// Return the next used entry after this one, or 0 if there is none
// before end_of_tree.
inline Mapping *
Mapping::next(const Mapping* end_of_tree)
{
  for (Mapping* m = this + 1;
       m < end_of_tree && ! m->is_end_tag();
       m++)
    {
      if (! m->unused())
        return m;
    }

  return 0;
}
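
// Usage sketch: next() yields the following live entry (skipping empty
// slots) and returns 0 at the end tag or the end of the array, so a whole
// tree can be walked like this (the same pattern appears in the disabled
// code in ~Physframe above); `tree` and visit() are placeholders:
#if 0
  for (Mapping* m = tree->mappings(); m; m = m->next (tree->end()))
    visit (m);
#endif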

static inline Kmem_slab *
allocator_for_treesize(int size)
{
  return mapping_tree_allocators::instance().allocator_for_treesize(size);
}

inline void*
Mapping_tree::operator new (size_t, unsigned size_factor)
{
  return allocator_for_treesize(size_factor)->alloc();
}

inline void
Mapping_tree::operator delete (void* block, size_t)
{
  if (!block)
    return;

  // Try to guess the right allocator object -- XXX unportable!
  Mapping_tree* t = static_cast<Mapping_tree*>(block);

  t->check_integrity();

  allocator_for_treesize(t->_size_id)->free(block);
}

// public routines with inline implementations

inline unsigned
Mapping_tree::number_of_entries() const
{
  return Size_factor << _size_id;
}

inline Mapping *
Mapping_tree::mappings()
{
  return & _mappings[0];
}

inline Mapping *
Mapping_tree::end()
{
  return mappings() + number_of_entries();
}

inline Mapping *
Mapping_tree::last()
{
  return end() - 1;
}

inline void
Mapping_tree::check_integrity()
{
#ifndef NDEBUG
  // Sanity checking
  assert (// Either all slots are used or marked empty
          number_of_entries() == _count + _empty_count
          // or the first slot after them holds the end tag
          || mappings()[_count + _empty_count].is_end_tag());

  Mapping* m = mappings();

  assert (! m->unused()         // The first entry is never unused.
          && m->data()->space_is_sigma0()
          && m->data()->depth == 0);

  unsigned used = 0, dead = 0;

  while (m < end()
         && ! m->is_end_tag())
    {
      if (m->unused())
        dead++;
      else
        used++;

      m++;
    }

  assert (_count == used);
  assert (_empty_count == dead);
#endif // ! NDEBUG
}

inline Kmem_slab *
mapping_tree_allocators::allocator_for_treesize(int size)
{
  return _allocator[size].get();
}

inline mapping_tree_allocators&
mapping_tree_allocators::instance()
{
  static mapping_tree_allocators tree_allocators;

  return tree_allocators;
}

#endif // mapdb_i_h
