ZDI-CAN-14952, CVE-2021-4011
This vulnerability was discovered and the fix was suggested by:
Jan-Niklas Sohn working with Trend Micro Zero Day Initiative
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
This commit allows X11 clients running through Xwayland to lease
non-desktop connectors from the Wayland compositor by implementing
support for drm-lease-v1.
To avoid deadlocking with the Wayland compositor if its response
to a lease request is delayed, use the new interface in _rrScrPriv
introduced in the previous commit, which makes it possible to
block the X11 client while a response is pending.
Leasing normal outputs is not yet supported; all connectors offered
for lease will be advertised as non-desktop.
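For reference, the core of a lease request against drm-lease-v1 looks
roughly like this (a sketch using the protocol's generated C bindings;
the surrounding variables and the listener are illustrative, not the
actual Xwayland code):

    struct wp_drm_lease_request_v1 *req =
        wp_drm_lease_device_v1_create_lease_request(xwl_screen->lease_device);

    /* Ask for one (or more) of the connectors advertised by the
     * compositor, then submit; the compositor answers asynchronously
     * with either a lease_fd or a finished event. */
    wp_drm_lease_request_v1_request_connector(req, xwl_output->lease_connector);

    struct wp_drm_lease_v1 *lease = wp_drm_lease_request_v1_submit(req);
    wp_drm_lease_v1_add_listener(lease, &lease_listener, xwl_output);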
Co-authored-by: Xaver Hugl <xaver.hugl@gmail.com>
Reviewed-by: Simon Ser <contact@emersion.fr>
Acked-by: Olivier Fourdan <ofourdan@redhat.com>
Acked-by: Michel Dänzer <mdaenzer@redhat.com>
Add a new interface to _rrScrPriv to make it possible for the server to
delay answering a lease request, at the cost of blocking the client. This
is needed for implementing drm-lease-v1, as the Wayland protocol has no
defined timetable for responding to lease requests.
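A rough sketch of the resulting flow (the names below are illustrative,
not the exact interface added to _rrScrPriv; IgnoreClient() and
AttendClient() are the existing dix primitives):

    /* The DDX may answer a lease request immediately, or defer it. */
    int ret = pScrPriv->rrRequestLease(client, pScreen, lease);

    if (ret == Success && lease_pending(lease)) {
        /* Stop processing this client's requests until the Wayland
         * compositor replies; the DDX later completes the lease and
         * wakes the client again with AttendClient(client). */
        IgnoreClient(client);
        return Success;
    }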
Signed-off-by: Xaver Hugl <xaver.hugl@gmail.com>
Acked-by: Michel Dänzer <mdaenzer@redhat.com>
This was overlooked when converting the function to use libxcvt.
Bring back name initialization from old code.
This was causing a segfault in xf86LookupMode() if modes whose
name is NULL are present in the modePool list.
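The reinstated initialization is roughly the following (a sketch
following the pre-libxcvt code; the libxcvt_mode_info field names are
from libxcvt):

    /* Name the generated mode again so xf86LookupMode() never finds a
     * modePool entry with name == NULL. */
    char *name;
    XNFasprintf(&name, "%dx%d", libxcvt_mode_info->hdisplay,
                libxcvt_mode_info->vdisplay);
    mode.name = name;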
Signed-off-by: Matthieu Herrb <matthieu@herrb.eu>
The compositor may send us wl_seat and its capabilities before sending
e.g. the relative_pointer_manager or pointer_gesture interfaces. This
would result in devices being created in the capabilities handler, but
not their listeners, because the interfaces weren't available at the
time. So we manually attempt to set up the listeners again.
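A sketch of that retry (helper and field names are assumptions, not the
actual Xwayland code):

    /* Called when a late global such as zwp_relative_pointer_manager_v1
     * is bound: the devices already exist from the wl_seat capabilities
     * handler, only their listeners are missing. */
    struct xwl_seat *xwl_seat;

    xorg_list_for_each_entry(xwl_seat, &xwl_screen->seat_list, link) {
        if (xwl_seat->wl_pointer && !xwl_seat->relative_pointer)
            setup_relative_pointer_listener(xwl_seat);  /* hypothetical */
    }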
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
The implementation is relatively straightforward because both Wayland
and Xorg use libinput semantics for touchpad gestures.
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
Attempting to run fvwm on an x61/965gm with xserver 1.21.1 and the
modesetting driver on OpenBSD/amd64 would cause the xserver to
crash reliably.
I tracked this down to the free() calls introduced in
2906ee5e4a
(d1ca47e124 in branch).
clang also warns about this:
glamor_program.c:296:13: warning: variable 'vs_prog_string' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
glamor_program.c:290:9: warning: variable 'vs_prog_string' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
glamor_program.c:288:9: warning: variable 'vs_prog_string' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
glamor_program.c:277:13: warning: variable 'vs_prog_string' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
glamor_program.c:296:13: warning: variable 'fs_prog_string' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
glamor_program.c:290:9: warning: variable 'fs_prog_string' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
glamor_program.c:288:9: warning: variable 'fs_prog_string' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
glamor_program.c:277:13: warning: variable 'fs_prog_string' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
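The pattern behind the warnings, reduced to a minimal sketch
(illustrative, not the actual glamor code):

    #include <stdlib.h>

    static int build(int fail_early)
    {
        char *vs_prog_string = NULL;  /* the fix: initialize to NULL */
        char *fs_prog_string = NULL;

        if (fail_early)
            goto fail;                /* reached before any assignment */

        vs_prog_string = malloc(16);
        fs_prog_string = malloc(16);
        if (!vs_prog_string || !fs_prog_string)
            goto fail;

        /* ... build the programs ... */
        free(vs_prog_string);
        free(fs_prog_string);
        return 0;

    fail:
        free(vs_prog_string);         /* without '= NULL' this frees an
                                         uninitialized pointer */
        free(fs_prog_string);
        return -1;
    }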
Signed-off-by: Jonathan Gray <jsg@jsg.id.au>
Reviewed-by: Olivier Fourdan <ofourdan@redhat.com>
Fixes: 2906ee5e4 ("glamor: Fix leak in glamor_build_program()")
The previous if/else condition resulted in us always setting the key
type count to the current number of key types. Split this up correctly.
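The shape of the fix, roughly (illustrative, not the exact xkb code):

    /* Only compute a new key type count when the client actually asked
     * for a resize; otherwise keep the existing count. */
    if (req->flags & XkbSetMapResizeTypes)
        nTypes = req->firstType + req->nTypes;
    else
        nTypes = xkb->map->num_types;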
Regression introduced in de940e06f8

Fixes #1249
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Currently, when given the choice, Xwayland will pick the GBM backend
over the EGLstream backend if both are available, unless the command
line option “-eglstream” is specified.
The NVIDIA proprietary driver had no support for GBM until the 495
driver series; starting with series 495, both backends can be used.
But there are other requirements on the rest of the stack (typically
Mesa, egl-wayland, libglvnd), as documented in the NVIDIA driver
documentation.
So if the NVIDIA driver series 495 gets installed, Xwayland will pick
the GBM backend even if EGLstream is available and may fail to render
properly.
To avoid that issue, prefer EGLstream if EGLstream and all the required
Wayland interfaces are available, and fall back to GBM automatically
unless “-eglstream” was specified.
With this, the compositor, given the choice, can decide which actual
backend Xwayland would use by advertising (or not) the Wayland
"wl_eglstream_controller" interface.
This change has no impact on compositors which do not have support for
EGLstream in the first place.
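In pseudo-C, the resulting selection is roughly (a sketch; the helper
and field names are illustrative):

    /* Prefer EGLstream when the compositor advertises the required
     * interfaces (including "wl_eglstream_controller"), or when
     * "-eglstream" was passed; otherwise fall back to GBM. */
    if (opt_eglstream || have_eglstream_interfaces(xwl_screen))
        xwl_screen->egl_backend = &xwl_screen->eglstream_backend;
    else
        xwl_screen->egl_backend = &xwl_screen->gbm_backend;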
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Acked-by: Michel Dänzer <mdaenzer@redhat.com>
Add (verbose) statements to trace the actual backend used with glamor.
That can be useful for debugging.
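Presumably along these lines (LogMessageVerb() is the existing logging
helper; the message wording is illustrative):

    LogMessageVerb(X_INFO, 3, "glamor: Using the GBM backend\n");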
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
On a normal startup sequence, the Xwayland glamor backend would log
an error whenever a required Wayland protocol is missing.
Those are not really errors though, but rather informational messages
about the glamor backend selection process.
Demote those errors to verbose messages to reduce the verbosity of
Xwayland at startup by default.
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Jonas Ådahl <jadahl@gmail.com>
If no EGLstream capable device is found at startup, Xwayland's EGLstream
backend will log an error message "glamor: No eglstream capable devices
found".
However, considering that the vast majority of drivers do not implement
EGLstream, the lack of EGLstream capable devices is more the norm than
the exception.
Change the error message to a verbose log message.
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Simon Ser <contact@emersion.fr>
Reviewed-by: Jonas Ådahl <jadahl@gmail.com>
When switching VTs, the DRM_DROP_MASTER ioctl must be done before
the VT_RELDISP ioctl. Otherwise the kernel can't change the modesetting
reliably, and this leads to the console not showing up in some cases,
like after unplugging a docking station with a DP or HDMI monitor.
Before doing the VT_RELDISP, send a D-Bus message to logind to pause
the DRM device, so logind will do the DRM_DROP_MASTER ioctl.
This patch changes the order in which logind sends the resume events:
DRM will now be sent last instead of first, so there is also a fix to
call systemd_logind_vtenter() at the right time.
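The required ordering, sketched (the logind helper name is hypothetical;
VT_RELDISP and xf86Info.consoleFd are the real ioctl and console fd):

    /* 1. Ask logind to pause the DRM device, so it issues
     *    DRM_DROP_MASTER first... */
    systemd_logind_pause_drm_device();   /* hypothetical helper */

    /* 2. ...and only then acknowledge the VT switch, so the kernel
     *    can reliably set the console mode. */
    ioctl(xf86Info.consoleFd, VT_RELDISP, 1);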
Signed-off-by: Jocelyn Falempe <jfalempe@redhat.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
logind sends the resume events for input devices and the DRM device
in any order. If we call vt_enter before logind resumes the DRM
device, it leads to a driver error, because logind has not yet done
the DRM_IOCTL_SET_MASTER on it.
Keep the old workaround to make sure we call systemd_logind_vtenter()
at least once if there are no platform devices.
Signed-off-by: Jocelyn Falempe <jfalempe@redhat.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Quite a lot of applications currently expect the screen DPI exposed by
the X server to be 96 even when the real display DPI is different.
Additionally, Xwayland currently completely ignores any hardware
information and sets the DPI to 96. Accordingly, the new behavior, even
if it fixes a bug, should not be enabled automatically for all users.
A better solution would be to make the default DPI stay as is and enable
the correct behavior with a command line option (maybe -dpi auto, or
similar). For now let's just revert the bug fix.
This reverts commit 05b3c681ea.
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
Install libxcvt build dep on appveyor.
Explicitly install python3.8 lxml to ensure it matches the installed
Python version (to work around issues with the Cygwin installer).
Drop explicit configuration of hal and udev, as meson.build now knows to
turn those off for Cygwin.
We were storing the pointer to struct glamor_context. However, glamor
itself is storing the EGLContext pointer since the commit below. Since
the two values could never be equal, this resulted in constant
superfluous eglMakeCurrent calls. The implicit glFlush triggered by
those couldn't be good for performance.
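The gist of the fix, sketched (field names are assumptions): store in
lastGLContext the same value glamor compares against, i.e. the
EGLContext itself rather than the wrapper struct.

    /* Before: lastGLContext = &xwl_screen->glamor_ctx; never equal to
     * the EGLContext glamor records, so every comparison failed and
     * eglMakeCurrent (with its implicit glFlush) ran again. After: */
    lastGLContext = xwl_screen->egl_context;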
Fixes: 7c88977d33 "glamor: Store the actual EGL/GLX context pointer in lastGLContext"
Reviewed-by: Olivier Fourdan <ofourdan@redhat.com>
The code to clear a cursor pending frame callback was duplicated in
multiple places in the code.
Introduce a new xwl_cursor_clear_frame_cb() function and remove the
duplicated code.
No functional change.
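The helper is essentially the previously duplicated pattern in one
place (a sketch; struct and field names as in xwayland's cursor code):

    static void
    xwl_cursor_clear_frame_cb(struct xwl_cursor *xwl_cursor)
    {
        if (xwl_cursor->frame_cb) {
            wl_callback_destroy(xwl_cursor->frame_cb);
            xwl_cursor->frame_cb = NULL;
        }
    }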
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Simon Ser <contact@emersion.fr>
Reviewed-by: Carlos Garnacho <carlosg@gnome.org>
It just makes more sense to keep xwl_cursor_release() with the rest of
the cursor code.
No functional change.
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Simon Ser <contact@emersion.fr>
Reviewed-by: Carlos Garnacho <carlosg@gnome.org>
Two different functions in xwayland-cursor.c and xwayland-input.c use
the same name xwl_seat_update_cursor() which is confusing when reading
the code.
Rename xwl_seat_update_cursor() to xwl_seat_update_all_cursors() in
xwayland-cursor.c to help with readability of the code.
No functional change.
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Simon Ser <contact@emersion.fr>
Reviewed-by: Carlos Garnacho <carlosg@gnome.org>
Passing -noTouchPointerEmulation results in an error about the
flag not being recognized.
Signed-off-by: Simon Ser <contact@emersion.fr>
Fixes: 7d34b1f2b7 ("xwayland: add -noTouchPointerEmulation")
The xwayland-piglit.sh script spawns weston, runs run-piglit.sh and
finally kills weston.
However, this whole script is running with “-e” meaning that any error
will cause the script to exit immediately.
As a result, if run-piglit.sh exits with a non-zero code such as 77 for
skipping the test, the script will exit prematurely, leaving weston
running; meson will then simply wait until the timeout kicks in and
eventually fail instead of skipping the test as it should.
Fix this by removing the exit-on-error option prior to spawning the
run-piglit.sh script.
Closes: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1204
Suggested-by: Michel Dänzer <mdaenzer@redhat.com>
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
If the tablet tool is moved out of proximity before the cursor's pending
frame callback is received, any further attempts to update the cursor
will fail because the frame callback is still pending.
Make sure to clear any cursor pending frame when the tool gets in
proximity again, similar to what we do when the pointer re-enters a
surface, so that the cursor updates aren't discarded.
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
See-also: https://gitlab.gnome.org/GNOME/mutter/-/issues/1969
Reviewed-by: Carlos Garnacho <carlosg@gnome.org>
Due to a switched order of parameters in the xorg_list_add()
call inside ProcRRCreateLease(), adding a new lease for RandR
output leasing does not actually add the new RRLeasePtr lease
record to the list of existing leases for an X-Screen, but instead
replaces the existing list with a new list that has the new lease
as its only element, and probably leaks a bit of memory.
Therefore the server "forgets" all active leases for a screen,
except for the last added lease. If multiple leases are created
in a session, then destruction of all leases but the last one
will fail in many cases, e.g., during server shutdown in
RRCloseScreen(), or resource destruction, e.g., in
RRCrtcDestroyResource().
Most importantly, it fails if a client simply close(fd)'es the
DRM master descriptor to release a lease, quits, gets killed or
crashes. In this case the kernel will destroy the lease and shut
down the display output, then send a lease event via udev to the
ddx, which e.g., in the modesetting-ddx will trigger a call to
drmmode_validate_leases().
That function is supposed to detect the released lease and tell
the server to terminate the lease on the server side as well,
via xf86CrtcLeaseTerminated(), but this doesn't happen for all
the leases the server has forgotten. The end result is a dead
video output, as the server won't reinitialize the crtcs
corresponding to the terminated but forgotten leases.
This bug was observed when using the amdvlk AMD OSS Vulkan
driver and trying to lease multiple VKDisplay's, and also
under Mesa radv, as both Mesa Vulkan/WSI/Display and amdvlk
terminate leases by simply close()ing the lease fd, not by
sending explicit RandR protocol requests to free leases.
Leasing worked, but ending a session with multiple active
leases ended in a lot of unpleasant darkness.
Fixing the wrong argument order to xorg_list_add() fixes the
problem. Tested on single-X-Screen and dual-X-Screen setups,
with one, two or three active leases.
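For clarity, xorg_list_add(entry, head) takes the new entry as its
first argument, so the fix is just swapping the two (the field names
here may differ slightly from the actual patch):

    -    xorg_list_add(&scr_priv->leases, &lease->list);
    +    xorg_list_add(&lease->list, &scr_priv->leases);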
Please merge this for the upcoming server 21.1 branch.
Merging into server 1.20 would also make a lot of sense.
Fixes: e4e3447603
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Cc: Keith Packard <keithp@keithp.com>
Some clients (typically Java, but maybe others) rely on ConfigureNotify
or RRScreenChangeNotify events to tell that the XRandR request is
successful.
When emulated XRandR is used in Xwayland, compute the emulated root size
and send the expected ConfigureNotify and RRScreenChangeNotify events
with the emulated size of the root window to the asking X11 client.
Note that the root window size does not actually change, as XRandR
emulation is achieved by scaling the client window using viewports in
Wayland, so this event is sort of misleading.
Also, because Xwayland is using viewports, emulating XRandR does not
reconfigure the outputs' locations, meaning that the actual size of the
root window which encompasses all the outputs together may not change
in a multi-monitor setup. To work around this limitation, when using an
emulated mode, we report the size of that emulated mode alone as the
root size for the ConfigureNotify event.
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Adding the offset between the realloc result and the old allocation to
update pointers into the new allocation is undefined behaviour: the
old pointers are no longer valid after realloc() according to the C
standard. While this works on almost all architectures and compilers,
it causes problems on architectures that track pointer bounds (e.g.
CHERI or Arm's Morello): the DevPrivateKey pointers will still have the
bounds of the previous allocation and therefore any dereference will
result in a run-time trap.
I found this due to a crash (dereferencing an invalid capability) while
trying to run `XVnc` on a CHERI-RISC-V system. With this commit I can
successfully connect to the XVnc instance running inside a QEMU with a
VNC viewer on my host.
This also changes the check whether the allocation was moved to use
uintptr_t instead of a pointer since according to the C standard:
"The value of a pointer becomes indeterminate when the object it
points to (or just past) reaches the end of its lifetime." Casting to an
integer type avoids this undefined behaviour.
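Reduced to a sketch (illustrative, not the actual dix code), the broken
pattern and the portable replacement look like this:

    #include <stdint.h>
    #include <stdlib.h>

    struct item { char *ptr; size_t offset; };

    static void grow(char **base, size_t new_size, struct item *it)
    {
        char *old = *base;
        char *new = realloc(old, new_size);  /* error handling elided */

        /* Undefined behaviour: 'old' is indeterminate after a
         * successful realloc(), so this adjustment is invalid (and
         * traps on CHERI/Morello, which track pointer bounds):
         *
         *     it->ptr += new - old;
         */

        /* Portable: compare as integers, then rebuild the interior
         * pointer from the new base plus a stored offset. */
        if ((uintptr_t) new != (uintptr_t) old)
            it->ptr = new + it->offset;

        *base = new;
    }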
Signed-off-by: Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
SimpleDRM 'devices' are fallback devices and do not have a busid,
so they were getting skipped. This change allows simpledrm to work
with the modesetting driver.
The "sync crtc" is the crtc used to drive the display timing of a
drawable under DRI2 and DRI3/Present. If a drawable intersects
multiple video outputs, then normally the crtc is chosen which has
the largest intersection area with the drawable.
If multiple outputs / crtcs have exactly the same intersection
area, then the crtc chosen was simply the first one with the maximum
intersection. In other words, the choice was random, depending on the
plugging order of displays.
This adds the ability to choose a preferred output in such a tie
situation. The RandR output marked as "primary output" is chosen
on such a tie.
This new behaviour and its implementation is consistent with other
video ddx drivers. See amdgpu-ddx, ati-ddx and nouveau-ddx for
reference. This commit is a straightforward port from amdgpu-ddx.
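The tie-break amounts to something like this in the crtc selection loop
(a sketch of the ported logic; the names are illustrative):

    if (coverage > best_coverage ||
        (coverage > 0 && coverage == best_coverage &&
         crtc_is_primary_output(crtc))) {
        best_crtc = crtc;
        best_coverage = coverage;
    }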
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
In a setup with both VRR capable and non-VRR capable displays,
it was so far inconsistent whether the driver would allow use of
VRR support or not, as "is_connector_vrr_capable" was set to
whatever the capabilities of the last added drm output were.
In other words, the plugging order of monitors determined the outcome.
Fix this: Now if at least one display is VRR capable, the driver
will treat an X-Screen as capable for VRR, plugging order no
longer matters.
Tested with a dual-display setup with one VRR monitor and one
non-VRR monitor. This is also beneficial with the new Option
"AsyncFlipSecondaries".
While we are at it, also add some so far missing description of
the "VariableRefresh" driver option, copied from amdgpu-ddx.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
A lut size of 4096 slots has been verified to work correctly,
as tested with amdgpu-kms. Intel Tigerlake Gen12 hw has a very
large GAMMA_LUT size of 262145 slots, but also issues with its
current GAMMA_LUT implementation, as of Linux 5.14.
Therefore we keep GAMMA_LUT off for large LUTs. This currently
excludes Intel Icelake, Tigerlake and later.
This can be overridden via the "UseGammaLUT" boolean xorg.conf option
to force use of GAMMA_LUT on or off.
See following link for the Tigerlake situation:
https://gitlab.freedesktop.org/drm/intel/-/issues/3916#note_1085315
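The override goes in the Device section of xorg.conf, e.g. to force
GAMMA_LUT on despite a large LUT:

    Section "Device"
        Identifier "Card0"
        Driver     "modesetting"
        Option     "UseGammaLUT" "on"
    EndSection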
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Commit 446ff2d317 added checks to
prevalidate the size of incoming SetMap requests.
That commit checks for the XkbSetMapResizeTypes flag to be set before
allowing key types data to be processed.
Key types data can be changed, or even just sent wholesale unchanged,
without the number of key types changing, however. The check for
XkbSetMapResizeTypes rejects those legitimate requests. In particular,
XkbChangeMap never sets XkbSetMapResizeTypes and so now always fails
any time XkbKeyTypesMask is in the changed mask.
This commit drops the check for XkbSetMapResizeTypes in flags when
prevalidating the request length.
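In diff form the change is essentially (a sketch; the exact condition
in the length check may differ):

    -    if ((req->present & XkbKeyTypesMask) &&
    -        (req->flags & XkbSetMapResizeTypes)) {
    +    if (req->present & XkbKeyTypesMask) {
             /* length-check the key types data */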
With the GBM backend becoming usable with different drivers such as
NVIDIA, set the GLVND vendor to the same value as the GBM backend name.
Mesa's implementation however returns "drm", so we need to special-case
this value. Basically, for anything other than "drm" we simply assume
that the GBM backend name is the same as the vendor.
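Sketched (gbm_device_get_backend_name() is real libgbm API; the setter
below stands in for however the vendor string is actually plumbed
through):

    const char *backend_name = gbm_device_get_backend_name(xwl_gbm->gbm);

    /* Mesa's GBM backend reports "drm"; map it to the "mesa" GLVND
     * vendor. Any other backend is assumed to name its vendor. */
    set_glvnd_vendor(strcmp(backend_name, "drm") == 0 ?
                     "mesa" : backend_name);   /* hypothetical setter */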
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
Reviewed-by: James Jones <jajones@nvidia.com>
Tested-by: James Jones <jajones@nvidia.com>