dix: Fix overzealous caching of ResourceClientBits()

Commit c7311654 cached the value of ResourceClientBits(), but that value
depends on the `MaxClients` value set either from the command line or from
the configuration file. For the latter, a call to ResourceClientBits() is
issued before the configuration file is read, meaning that the cached value
comes from the default, not from the maximum number of clients set in the
configuration file.

That obviously causes all sorts of issues, including memory corruption and
crashes of the Xserver when reaching the default limit value.

To avoid that issue, also keep the LimitClients value, and recompute the
ilog2() value if it changes, as on startup when the value is set from the
xorg.conf ServerFlags section.

v2: Drop the `cache == 0` test
    Rename cache vars

Fixes: c7311654 - dix: cache ResourceClientBits() value
Closes: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1310
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Parent: 24d7d93ff2
Commit: 2efa6d6595
@@ -620,12 +620,15 @@ ilog2(int val)
 unsigned int
 ResourceClientBits(void)
 {
-    static unsigned int cached = 0;
+    static unsigned int cache_ilog2 = 0;
+    static unsigned int cache_limit = 0;

-    if (cached == 0)
-        cached = ilog2(LimitClients);
+    if (LimitClients != cache_limit) {
+        cache_limit = LimitClients;
+        cache_ilog2 = ilog2(LimitClients);
+    }

-    return cached;
+    return cache_ilog2;
 }

 /*****************