WARNING: Project specifies a minimum meson_version '>= 0.47.0' but uses features which were added in newer versions:
* 0.50.0: {'install arg in configure_file'}
Signed-off-by: Jeremy Huddleston Sequoia <jeremyhu@apple.com>
(cherry picked from commit 0a27f96d1d)
This changes away from hard-coding the /tmp/launch-* path: $DISPLAY
now supports a generic <absolute path to unix socket>[.<screen>]
format.
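A rough sketch of how such a value could be parsed (illustrative only,
not libxcb's actual parser; all_digits and connect_unix_socket are
hypothetical helpers):
```
/* $DISPLAY of the form "<absolute path to unix socket>[.<screen>]" */
if (display[0] == '/') {
    char *path = strdup(display);
    char *dot = strrchr(path, '.');
    int screen = 0;

    if (dot && all_digits(dot + 1)) {      /* hypothetical helper */
        screen = atoi(dot + 1);
        *dot = '\0';                       /* strip the screen suffix */
    }
    connect_unix_socket(path, screen);     /* hypothetical helper */
    free(path);
}
```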
cf-libxcb: d978a4f69b30b630f28d07f1003cf290284d24d8
Signed-off-by: Jeremy Huddleston Sequoia <jeremyhu@apple.com>
CC: Adam Jackson <ajax@kemper.freedesktop.org>
(cherry picked from commit 83d0d91106)
The xserver fails to compile with the latest gcc 12:
render/picture.c: In function ‘CreateSolidPicture’:
render/picture.c:874:26: error: array subscript ‘union _SourcePict[0]’ is partly outside array bounds of ‘unsigned char[16]’ [-Werror=array-bounds]
874 | pPicture->pSourcePict->type = SourcePictTypeSolidFill;
| ^~
render/picture.c:868:45: note: object of size 16 allocated by ‘malloc’
868 | pPicture->pSourcePict = (SourcePictPtr) malloc(sizeof(PictSolidFill));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
render/picture.c: In function ‘CreateLinearGradientPicture’:
render/picture.c:906:26: error: array subscript ‘union _SourcePict[0]’ is partly outside array bounds of ‘unsigned char[32]’ [-Werror=array-bounds]
906 | pPicture->pSourcePict->linear.type = SourcePictTypeLinear;
| ^~
render/picture.c:899:45: note: object of size 32 allocated by ‘malloc’
899 | pPicture->pSourcePict = (SourcePictPtr) malloc(sizeof(PictLinearGradient));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
render/picture.c: In function ‘CreateConicalGradientPicture’:
render/picture.c:989:26: error: array subscript ‘union _SourcePict[0]’ is partly outside array bounds of ‘unsigned char[32]’ [-Werror=array-bounds]
989 | pPicture->pSourcePict->conical.type = SourcePictTypeConical;
| ^~
render/picture.c:982:45: note: object of size 32 allocated by ‘malloc’
982 | pPicture->pSourcePict = (SourcePictPtr) malloc(sizeof(PictConicalGradient));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cc1: some warnings being treated as errors
ninja: build stopped: subcommand failed.
This is because gcc 12 has become stricter and now raises this warning.
Fix the warning/error by allocating enough memory to store the full
union, not just the single member that was requested.
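A minimal sketch of the fix, assuming the approach described above (the
diagnostics name the union as 'union _SourcePict'):
```
/* before: allocation sized for a single union member */
pPicture->pSourcePict = (SourcePictPtr) malloc(sizeof(PictSolidFill));

/* after: allocate the whole union, so accesses through the
 * union-typed pointer stay within the bounds gcc 12 checks */
pPicture->pSourcePict = (SourcePictPtr) malloc(sizeof(SourcePict));
```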
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Acked-by: Michel Dänzer <mdaenzer@redhat.com>
Closes: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1256
(cherry picked from commit c6b0dcb82d)
For depth 30 in particular it's not uncommon for the DDX to not have
a configured pixmap format. Since the client expects to back both
GLXPixmaps and GLXPbuffers with X Pixmaps, trying to use an x2rgb10
fbconfig would fail along various paths to CreatePixmap. Filter these
fbconfigs out so the client can't ask for something that we know won't
work.
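A sketch of the kind of check involved (illustrative; the helper name
is hypothetical, the screenInfo fields are the dix ones):
```
/* Keep an fbconfig only if the server has a pixmap format for its
 * depth; otherwise CreatePixmap-backed GLXPixmaps/GLXPbuffers fail. */
static Bool
depth_has_pixmap_format(int depth)
{
    int i;

    for (i = 0; i < screenInfo.numPixmapFormats; i++)
        if (screenInfo.formats[i].depth == depth)
            return TRUE;
    return FALSE;    /* e.g. depth 30 with no configured format */
}
```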
(cherry picked from commit f6c070a1ac)
If there is one platform device which is neither paused nor resumed,
systemd_logind_vtenter() will never get called.
This breaks suspend/resume, and switching VTs, on systems with the
Nvidia proprietary driver.
This is a regression introduced by f5bd039633.
So now call systemd_logind_vtenter() if there are no paused
platform devices.
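A minimal sketch of the new condition (field names follow the
xserver's platform-bus code, but treat this as illustrative):
```
/* Only defer the VT enter while some platform device is still paused. */
Bool paused = FALSE;
int i;

for (i = 0; i < xf86_num_platform_devices; i++)
    if (xf86_platform_devices[i].flags & XF86_PDEV_PAUSED)
        paused = TRUE;

if (!paused)
    systemd_logind_vtenter();
```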
Closes: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1271
Fixes: f5bd0396 - xf86/logind: fix call systemd_logind_vtenter after receiving drm device resume
Signed-off-by: Jocelyn Falempe <jfalempe@redhat.com>
Tested-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
This fixes a crash where a DeviceEvent struct converted to
InternalEvent was being copied as a full InternalEvent (and thus
causing out-of-bounds reads) in ActivateGrabNoDelivery()
in events.c:3876: *grabinfo->sync.event = *real_event;
Possible fix for https://gitlab.freedesktop.org/xorg/xserver/-/issues/1253
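A sketch of the likely shape of the fix, assuming the buffer backing
sync.event was sized for a DeviceEvent rather than the full union:
```
/* before: buffer only large enough for a DeviceEvent */
grabinfo->sync.event = calloc(1, sizeof(DeviceEvent));

/* after: allocate the full InternalEvent union so the struct copy
 *   *grabinfo->sync.event = *real_event;
 * stays within bounds */
grabinfo->sync.event = calloc(1, sizeof(InternalEvent));
```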
Signed-off-by: Matthieu Herrb <matthieu@herrb.eu>
(cherry picked from commit 5b8817a019)
Initially reported downstream in Gentoo. Manifests with errors like:
```
gnu/bin/ld: hw/xfree86/common/libxorg_common.a(xf86fbBus.c.o): in function `xf86ClaimFbSlot':
xf86fbBus.c:(.text+0x20): undefined reference to `sbusSlotClaimed'
/usr/lib/gcc/sparc-unknown-linux-gnu/11.2.0/../../../../sparc-unknown-linux-gnu/bin/ld: xf86fbBus.c:(.text+0x2c): undefined reference to `sbusSlotClaimed'
```
While we use the headers in meson.build, we don't reference
xf86sbusBus.c, which defines the missing symbols such as sbusSlotClaimed.
Bug: https://bugs.gentoo.org/828513
Signed-off-by: Sam James <sam@gentoo.org>
(cherry picked from commit 6c1a1fcc4b)
ZDI-CAN-14192, CVE-2021-4008
This vulnerability was discovered and the fix was suggested by:
Jan-Niklas Sohn working with Trend Micro Zero Day Initiative
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
(cherry picked from commit ebce7e2d80)
ZDI-CAN-14951, CVE-2021-4010
This vulnerability was discovered and the fix was suggested by:
Jan-Niklas Sohn working with Trend Micro Zero Day Initiative
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
(cherry picked from commit 6c4c530107)
ZDI-CAN-14950, CVE-2021-4009
This vulnerability was discovered and the fix was suggested by:
Jan-Niklas Sohn working with Trend Micro Zero Day Initiative
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
(cherry picked from commit b519675009)
ZDI-CAN-14952, CVE-2021-4011
This vulnerability was discovered and the fix was suggested by:
Jan-Niklas Sohn working with Trend Micro Zero Day Initiative
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
(cherry picked from commit e56f61c79f)
With the new numbering scheme, XORG_VERSION_SNAP doesn't mean
a pre-release version anymore.
Signed-off-by: Matthieu Herrb <matthieu@herrb.eu>
(cherry picked from commit 4de9666b6d)
Attempting to run fvwm on an x61/965gm with xserver 1.21.1 and the
modesetting driver on OpenBSD/amd64 would cause the xserver to
crash reliably.
I tracked this down to the free() calls introduced in
2906ee5e4a
(d1ca47e124 in branch).
clang also warns about this:
glamor_program.c:296:13: warning: variable 'vs_prog_string' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
glamor_program.c:290:9: warning: variable 'vs_prog_string' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
glamor_program.c:288:9: warning: variable 'vs_prog_string' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
glamor_program.c:277:13: warning: variable 'vs_prog_string' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
glamor_program.c:296:13: warning: variable 'fs_prog_string' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
glamor_program.c:290:9: warning: variable 'fs_prog_string' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
glamor_program.c:288:9: warning: variable 'fs_prog_string' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
glamor_program.c:277:13: warning: variable 'fs_prog_string' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
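A minimal sketch of the fix implied by those warnings: initialize the
strings so the error paths can free() them unconditionally.
```
char *vs_prog_string = NULL;    /* was uninitialized */
char *fs_prog_string = NULL;    /* was uninitialized */

/* ... build the strings; several error paths jump to fail ... */

fail:
    free(vs_prog_string);       /* free(NULL) is a harmless no-op */
    free(fs_prog_string);
```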
Signed-off-by: Jonathan Gray <jsg@jsg.id.au>
Reviewed-by: Olivier Fourdan <ofourdan@redhat.com>
Fixes: 2906ee5e4 ("glamor: Fix leak in glamor_build_program()")
(cherry picked from commit 5ac6319776)
The previous if/else condition resulted in us always setting the key
type count to the current number of key types. Split this up correctly.
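An illustrative sketch of the corrected logic (not the literal patch):
take the count from the request only when the client actually sends
key types, and keep the map's current count otherwise.
```
if (stuff->present & XkbKeyTypesMask)
    nTypes = stuff->nTypes;          /* client supplied key types */
else
    nTypes = xkb->map->num_types;    /* keep the current count */
```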
Regression introduced in de940e06f8
Fixes #1249
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
(cherry picked from commit be16bd8543)
Quite a lot of applications currently expect the screen DPI exposed by
the X server to be 96 even when the real display DPI is different.
Additionally, Xwayland currently ignores any hardware information
completely and sets the DPI to 96. Accordingly, the new behavior, even
though it fixes a bug, should not be enabled automatically for all
users.
A better solution would be to keep the default DPI as is and enable
the correct behavior with a command line option (maybe -dpi auto, or
similar). For now let's just revert the bug fix.
This reverts commit 05b3c681ea.
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
(cherry picked from commit 35af1299e7)
When switching VTs, the ioctl DRM_DROP_MASTER must be done before
the ioctl VT_RELDISP. Otherwise the kernel can't reliably change the
modesetting, and this leads to the console not showing up in some
cases, like after unplugging a docking station with a DP or HDMI
monitor.
Before doing the VT_RELDISP, send a dbus message to logind to
pause the drm device, so logind will do the ioctl DRM_DROP_MASTER.
With this patch, the order in which logind sends the resume events
changes: drm is now sent last instead of first, so there is also a
fix to call systemd_logind_vtenter() at the right time.
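The resulting switch-away sequence, sketched (function names are
assumptions, not necessarily the exact patch):
```
/* Ask logind to pause the drm device first, which makes it issue
 * DRM_DROP_MASTER, and only then release the VT. */
systemd_logind_drop_master();              /* dbus -> DRM_DROP_MASTER */
ioctl(xf86Info.consoleFd, VT_RELDISP, 1);  /* kernel can now modeset */
```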
Signed-off-by: Jocelyn Falempe <jfalempe@redhat.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
(cherry picked from commit da9d012a9c)
logind sends the resume events for input devices and the drm device
in any order. If we call vt_enter before logind resumes the drm
device, it leads to a driver error, because logind has not yet done
the DRM_IOCTL_SET_MASTER ioctl on it.
Keep the old workaround to make sure we call systemd_logind_vtenter()
at least once if there are no platform devices.
Signed-off-by: Jocelyn Falempe <jfalempe@redhat.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
(cherry picked from commit f5bd039633)
Due to a switched order of parameters in the xorg_list_add()
call inside ProcRRCreateLease(), adding a new lease for RandR
output leasing does not actually add the new RRLeasePtr lease
record to the list of existing leases for an X-Screen, but instead
replaces the existing list with a new list that has the new lease
as its only element, and probably leaks a bit of memory.
Therefore the server "forgets" all active leases for a screen,
except for the last added lease. If multiple leases are created
in a session, then destruction of all leases but the last one
will fail in many cases, e.g., during server shutdown in
RRCloseScreen(), or resource destruction, e.g., in
RRCrtcDestroyResource().
Most importantly, it fails if a client simply close(fd)'es the
DRM master descriptor to release a lease, quits, gets killed or
crashes. In this case the kernel will destroy the lease and shut
down the display output, then send a lease event via udev to the
ddx, which e.g., in the modesetting-ddx will trigger a call to
drmmode_validate_leases().
That function is supposed to detect the released lease and tell
the server to terminate the lease on the server side as well,
via xf86CrtcLeaseTerminated(), but this doesn't happen for all
the leases the server has forgotten. The end result is a dead
video output, as the server won't reinitialize the crtc's
corresponding to the terminated but forgotten lease.
This bug was observed when using the amdvlk AMD OSS Vulkan
driver and trying to lease multiple VKDisplay's, and also
under Mesa radv, as both Mesa Vulkan/WSI/Display and amdvlk
terminate leases by simply close()ing the lease fd, not by
sending explicit RandR protocol requests to free leases.
Leasing worked, but ending a session with multiple active
leases ended in a lot of unpleasant darkness.
Fixing the wrong argument order to xorg_list_add() fixes the
problem. Tested on single-X-Screen and dual-X-Screen setups,
with one, two or three active leases.
Please merge this for the upcoming server 21.1 branch.
Merging into server 1.20 would also make a lot of sense.
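The fix itself is a one-line argument swap, sketched here with
approximate names:
```
/* xorg_list_add(entry, head): new entry first, list head second. */

/* before: made the new lease the head of a fresh one-element list */
xorg_list_add(&pScrPriv->leases, &lease->list);

/* after: append the new lease to the screen's list of leases */
xorg_list_add(&lease->list, &pScrPriv->leases);
```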
Fixes: e4e3447603
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Cc: Keith Packard <keithp@keithp.com>
(cherry picked from commit f467f85ca1)
Adding the offset between the realloc result and the old allocation to
update pointers into the new allocation is undefined behaviour: the
old pointers are no longer valid after realloc() according to the C
standard. While this works on almost all architectures and compilers,
it causes problems on architectures that track pointer bounds (e.g.
CHERI or Arm's Morello): the DevPrivateKey pointers will still have the
bounds of the previous allocation and therefore any dereference will
result in a run-time trap.
I found this due to a crash (dereferencing an invalid capability) while
trying to run `XVnc` on a CHERI-RISC-V system. With this commit I can
successfully connect to the XVnc instance running inside a QEMU with a
VNC viewer on my host.
This also changes the check for whether the allocation was moved to
compare uintptr_t values instead of pointers since, according to the C
standard:
"The value of a pointer becomes indeterminate when the object it
points to (or just past) reaches the end of its lifetime." Casting to an
integer type avoids this undefined behaviour.
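A minimal sketch of the safe pattern (hypothetical names): record
offsets before the realloc, rebuild pointers from the new base, and
compare addresses only as integers.
```
/* 'elems' is a growable array of Elem; 'cursor' points into it */
size_t idx = cursor - elems;              /* offset, taken before realloc */
uintptr_t old_base = (uintptr_t) elems;   /* integer copy: comparing the
                                           * stale pointer itself is UB */

Elem *new_elems = realloc(elems, new_count * sizeof(*new_elems));
if (!new_elems)
    return FALSE;

if ((uintptr_t) new_elems != old_base)
    cursor = new_elems + idx;   /* derive from the new base; never add
                                 * the old/new delta to a stale pointer */
elems = new_elems;
```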
Signed-off-by: Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
(cherry picked from commit f9f705bf3c)
SimpleDRM 'devices' are fallback devices and do not have a busid,
so they were getting skipped. This allows simpledrm to work
with the modesetting driver.
(cherry picked from commit b9218fadf3)
The "sync crtc" is the crtc used to drive the display timing of a
drawable under DRI2 and DRI3/Present. If a drawable intersects
multiple video outputs, then normally the crtc is chosen which has
the largest intersection area with the drawable.
If multiple outputs / crtc's have exactly the same intersection
area, then the crtc chosen was simply the first one with maximum
intersection. In other words, the choice was random, depending on
the plugging order of displays.
This adds the ability to choose a preferred output in such a tie
situation. The RandR output marked as "primary output" is chosen
on such a tie.
This new behaviour and its implementation is consistent with other
video ddx drivers. See amdgpu-ddx, ati-ddx and nouveau-ddx for
reference. This commit is a straightforward port from amdgpu-ddx.
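A sketch of the tie-break, modeled loosely on the amdgpu-ddx logic
(box_intersect, box_area and crtc_is_primary are illustrative helpers):
```
int c, coverage, best_coverage = 0;
xf86CrtcPtr best_crtc = NULL;
BoxRec cover_box;

for (c = 0; c < xf86_config->num_crtc; c++) {
    xf86CrtcPtr crtc = xf86_config->crtc[c];

    box_intersect(&cover_box, &crtc_box, &drawable_box);
    coverage = box_area(&cover_box);

    /* on an exact tie, prefer the crtc of the RandR primary output */
    if (coverage > best_coverage ||
        (coverage > 0 && coverage == best_coverage &&
         crtc_is_primary(crtc) && !crtc_is_primary(best_crtc))) {
        best_crtc = crtc;
        best_coverage = coverage;
    }
}
```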
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
(cherry picked from commit 4b75e65766)
In a setup with both VRR capable and non-VRR capable displays,
it was so far inconsistent whether the driver would allow use of
VRR support or not, as "is_connector_vrr_capable" was set to
whatever the capabilities of the last added drm output were.
In other words, the plugging order of monitors determined the
outcome.
Fix this: now, if at least one display is VRR capable, the driver
will treat the X-Screen as capable of VRR, and plugging order no
longer matters.
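Conceptually the change is one line, from last-writer-wins to an
accumulating test (sketch; the helper name is hypothetical):
```
/* before: whichever output was probed last decided for the screen */
ms->is_connector_vrr_capable = output_is_vrr_capable(output);

/* after: the X-Screen is VRR capable if any output is */
ms->is_connector_vrr_capable |= output_is_vrr_capable(output);
```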
Tested with a dual-display setup with one VRR monitor and one
non-VRR monitor. This is also beneficial with the new Option
"AsyncFlipSecondaries".
While we are at it, also add some so far missing description of
the "VariableRefresh" driver option, copied from amdgpu-ddx.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
(cherry picked from commit 017ce26337)
A lut size of 4096 slots has been verified to work correctly,
as tested with amdgpu-kms. Intel Tigerlake Gen12 hw has a very
large GAMMA_LUT size of 262145 slots, but also issues with its
current GAMMA_LUT implementation, as of Linux 5.14.
Therefore we keep GAMMA_LUT off for large luts. This currently
excludes Intel Icelake, Tigerlake and later.
This can be overridden via the "UseGammaLUT" boolean xorg.conf option
to force use of GAMMA_LUT on or off.
See following link for the Tigerlake situation:
https://gitlab.freedesktop.org/drm/intel/-/issues/3916#note_1085315
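A sketch of the resulting policy (the 4096 threshold is the one named
above; the option plumbing is illustrative):
```
#define MAX_GAMMA_LUT_SIZE 4096    /* verified to work with amdgpu-kms */

Bool use_gamma_lut = gamma_lut_size > 0 &&
                     gamma_lut_size <= MAX_GAMMA_LUT_SIZE;

/* Option "UseGammaLUT" forces the choice either way */
use_gamma_lut = xf86ReturnOptValBool(drmmode->Options,
                                     OPTION_USE_GAMMA_LUT, use_gamma_lut);
```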
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
(cherry picked from commit 66e5a5bb12)
Commit 446ff2d317 added checks to
prevalidate the size of incoming SetMap requests.
That commit checks for the XkbSetMapResizeTypes flag to be set before
allowing key types data to be processed.
Key types data can be changed, or even just sent wholesale unchanged,
without the number of key types changing, however. The check for
XkbSetMapResizeTypes rejects those legitimate requests. In particular,
XkbChangeMap never sets XkbSetMapResizeTypes and so now always fails
any time XkbKeyTypesMask is in the changed mask.
This commit drops the check for XkbSetMapResizeTypes in flags when
prevalidating the request length.
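Sketched (illustrative, not the literal diff), the prevalidation goes
from requiring the resize flag to accepting any request that includes
key types:
```
/* before: legitimate XkbChangeMap requests were rejected */
if ((stuff->present & XkbKeyTypesMask) &&
    (stuff->flags & XkbSetMapResizeTypes)) {
    /* ... validate the key types data length ... */
}

/* after: validate whenever key types data is present */
if (stuff->present & XkbKeyTypesMask) {
    /* ... validate the key types data length ... */
}
```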
(cherry picked from commit 8b7f4d3259)
This reverts commit 617f591fc4.
The problem described in that commit exists, but the two
preceding commits, with improvements to the server's RandR
code, should avoid the mentioned problems while allowing the
use of GAMMA_LUT's instead of the legacy gamma lut.
Use of legacy gamma luts is not a good fix, because it will reduce
the color output precision of gpus with more than 1024 GAMMA_LUT
slots, e.g., AMD, ARM MALI and KOMEDA with 4096-slot luts,
and some MediaTek parts with 512-slot luts. On KOMEDA, legacy
luts are completely unsupported by the kms driver, so gamma
correction gets disabled.
The situation is especially bad on Intel Icelake and later:
Use of legacy gamma tables will cause the kms driver to switch
to hardware legacy lut's with 256 slots, 8 bit wide, without
interpolation. This way color output precision is restricted to
8 bpc and any deep color / HDR output (10 bpc, fp16, fixed point 16)
becomes impossible. The latest Intel gen gpu's would have worse
color precision than parts which are more than 10 years old.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
(cherry picked from commit 545fa90cbf)
The assumption in the upsampling code was that crtc->gamma_size,
the size of the crtc's gamma table, is a power of two. This is true
for almost all current driver + gpu combos, at least on Linux, with
typical sizes of 256, 512, 1024 or 4096 slots.
However, Intel Gen-11 Icelake and later are outliers, as their gamma
table has 2^18 + 1 slots, very big and not a power of two!
Try to make upsampling behave at least reasonably: replicate the
last gamma value to fill up the remaining crtc->gamma_red/green/blue
slots, which would otherwise stay uninitialized. This is important,
because while the intel display driver does not actually use all
2^18+1 values passed as part of a GAMMA_LUT, it does need the
very last slot, which would not get initialized by the old code.
This should hopefully create reasonable behaviour with Icelake+,
but is untested on the actual Intel hw for lack of suitable hw.
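A minimal sketch of the replication step (assuming 'filled' entries
were already computed by the usual upsampling loop):
```
/* Replicate the last computed value into any remaining slots, so a
 * non-power-of-two gamma_size (e.g. 2^18 + 1) leaves no slot, and in
 * particular not the very last one, uninitialized. */
for (i = filled; i < crtc->gamma_size; i++) {
    crtc->gamma_red[i]   = crtc->gamma_red[filled - 1];
    crtc->gamma_green[i] = crtc->gamma_green[filled - 1];
    crtc->gamma_blue[i]  = crtc->gamma_blue[filled - 1];
}
```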
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
(cherry picked from commit 7326e131df)