We were storing the pointer to struct glamor_context. However, glamor
itself has been storing the EGLContext pointer since the commit below. Since
the two values could never be equal, this resulted in constant
superfluous eglMakeCurrent calls. The implicit glFlush triggered by
those couldn't be good for performance.
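A minimal sketch of the intended check (lastGLContext is the server's
current-context cache; the struct layout here is assumed for
illustration and may differ from the real code):

    #include <epoxy/egl.h>

    struct glamor_context {
        EGLDisplay display;
        EGLContext ctx;         /* what glamor stores since the commit below */
    };

    extern void *lastGLContext; /* server-wide current-context cache */

    static void
    glamor_ctx_make_current(struct glamor_context *glamor_ctx)
    {
        /* Compare like with like: both sides are now the EGLContext, so
         * the check can hit and redundant eglMakeCurrent calls (and
         * their implicit glFlush) are skipped. */
        if (lastGLContext != glamor_ctx->ctx) {
            lastGLContext = glamor_ctx->ctx;
            eglMakeCurrent(glamor_ctx->display, EGL_NO_SURFACE,
                           EGL_NO_SURFACE, glamor_ctx->ctx);
        }
    }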
Fixes: 7c88977d33 "glamor: Store the actual EGL/GLX context pointer in lastGLContext"
Reviewed-by: Olivier Fourdan <ofourdan@redhat.com>
The code to clear a cursor pending frame callback was duplicated in
multiple places in the code.
Introduce a new xwl_cursor_clear_frame_cb() function and remove the
duplicated code.
No functional change.
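A minimal sketch of what such a helper could look like, assuming a
frame_cb field on the cursor struct:

    #include <wayland-client.h>

    struct xwl_cursor {
        struct wl_callback *frame_cb;  /* pending wl_surface frame callback */
        /* ... */
    };

    /* Destroy and clear a pending frame callback, if any, so the next
     * cursor update is not discarded. */
    static void
    xwl_cursor_clear_frame_cb(struct xwl_cursor *xwl_cursor)
    {
        if (xwl_cursor->frame_cb) {
            wl_callback_destroy(xwl_cursor->frame_cb);
            xwl_cursor->frame_cb = NULL;
        }
    }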
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Simon Ser <contact@emersion.fr>
Reviewed-by: Carlos Garnacho <carlosg@gnome.org>
It just makes more sense to keep xwl_cursor_release() with the rest of
the cursor code.
No functional change.
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Simon Ser <contact@emersion.fr>
Reviewed-by: Carlos Garnacho <carlosg@gnome.org>
Two different functions in xwayland-cursor.c and xwayland-input.c use
the same name, xwl_seat_update_cursor(), which is confusing when reading
the code.
Rename xwl_seat_update_cursor() to xwl_seat_update_all_cursors() in
xwayland-cursor.c to help with readability of the code.
No functional change.
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Simon Ser <contact@emersion.fr>
Reviewed-by: Carlos Garnacho <carlosg@gnome.org>
Passing -noTouchPointerEmulation results in an error about the
flag not being recognized.
Signed-off-by: Simon Ser <contact@emersion.fr>
Fixes: 7d34b1f2b7 ("xwayland: add -noTouchPointerEmulation")
If the tablet tool is moved out of proximity before the cursor's pending
frame callback is received, any further attempts to update the cursor
will fail because the frame callback is still pending.
Make sure to clear any cursor pending frame when the tool gets in
proximity again, similar to what we do when the pointer re-enters a
surface, so that the cursor updates aren't discarded.
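A minimal sketch of the proximity-in side, reusing the
xwl_cursor_clear_frame_cb() helper introduced earlier in this series
(handler shape and fields are illustrative):

    struct xwl_tablet_tool {
        struct xwl_cursor cursor;
        /* ... */
    };

    static void
    tablet_tool_proximity_in(struct xwl_tablet_tool *xwl_tablet_tool)
    {
        /* A frame callback left pending from before the tool went out
         * of proximity would otherwise block every further cursor
         * update. */
        xwl_cursor_clear_frame_cb(&xwl_tablet_tool->cursor);
        /* ... regular proximity-in handling continues here ... */
    }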
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
See-also: https://gitlab.gnome.org/GNOME/mutter/-/issues/1969
Reviewed-by: Carlos Garnacho <carlosg@gnome.org>
Some clients (typically Java, but maybe others) rely on ConfigureNotify
or RRScreenChangeNotify events to tell whether their XRandR request was
successful.
When emulated XRandR is used in Xwayland, compute the emulated root size
and send the expected ConfigureNotify and RRScreenChangeNotify events
with the emulated size of the root window to the asking X11 client.
Note that the root window size does not actually change, as XRandR
emulation is achieved by scaling the client window using viewports in
Wayland, so this event is sort of misleading.
Also, because Xwayland is using viewports, emulating XRandR does not
reconfigure the outputs' locations, meaning that the actual size of the
root window which encompasses all the outputs together may not change
in a multi-monitor setup. To work around this limitation, when using an
emulated mode, we report the size of that emulated mode alone as the
root size for the configure notify event.
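A rough sketch of delivering the ConfigureNotify part with the emulated
size (the helper and its plumbing are illustrative; the real change
also delivers RRScreenChangeNotify):

    /* Hedged sketch: report the emulated mode's size as the root
     * window geometry to the client that issued the RandR request. */
    static void
    send_emulated_configure_notify(ClientPtr client, WindowPtr root,
                                   int emulated_width, int emulated_height)
    {
        xEvent event = { 0 };

        event.u.u.type = ConfigureNotify;
        event.u.configureNotify.event = root->drawable.id;
        event.u.configureNotify.window = root->drawable.id;
        event.u.configureNotify.width = emulated_width;
        event.u.configureNotify.height = emulated_height;

        WriteEventsToClient(client, 1, &event);
    }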
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
SimpleDRM 'devices' are fallback devices and do not have a busid, so
they were getting skipped. This allows simpledrm to work with the
modesetting driver.
The "sync crtc" is the crtc used to drive the display timing of a
drawable under DRI2 and DRI3/Present. If a drawable intersects
multiple video outputs, then normally the crtc is chosen which has
the largest intersection area with the drawable.
If multiple outputs / crtcs have exactly the same intersection
area then the crtc chosen was simply the first one with maximum
intersection. In other words, the choice was random, depending on the
plugging order of displays.
This adds the ability to choose a preferred output in such a tie
situation. The RandR output marked as "primary output" is chosen
on such a tie.
This new behaviour and its implementation are consistent with other
video ddx drivers. See amdgpu-ddx, ati-ddx and nouveau-ddx for
reference. This commit is a straightforward port from amdgpu-ddx.
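A minimal sketch of the tie-break, following the amdgpu-ddx pattern
(helper and variable names are illustrative):

    /* Hedged sketch: 'coverage' is the intersection area of the
     * drawable with a candidate crtc; 'primary_crtc' drives the RandR
     * primary output. Types come from xf86Crtc.h. */
    static Bool
    crtc_is_better_choice(xf86CrtcPtr crtc, xf86CrtcPtr primary_crtc,
                          int coverage, int best_coverage)
    {
        if (coverage > best_coverage)
            return TRUE;          /* strictly larger intersection wins */
        /* On an exact tie, prefer the primary output's crtc instead of
         * whichever crtc happened to be enumerated first. */
        if (coverage == best_coverage && coverage > 0 &&
            crtc == primary_crtc)
            return TRUE;
        return FALSE;
    }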
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
In a setup with both VRR capable and non-VRR capable displays,
it was so far inconsistent whether the driver would allow use of
VRR or not, as "is_connector_vrr_capable" was set to
whatever the capabilities of the last added drm output were.
In other words, the plugging order of monitors determined the outcome.
Fix this: Now if at least one display is VRR capable, the driver
will treat an X-Screen as capable for VRR, plugging order no
longer matters.
Tested with a dual-display setup with one VRR monitor and one
non-VRR monitor. This is also beneficial with the new Option
"AsyncFlipSecondaries".
While we are at it, also add the so far missing description of
the "VariableRefresh" driver option, copied from amdgpu-ddx.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
A lut size of 4096 slots has been verified to work correctly,
as tested with amdgpu-kms. Intel Tigerlake Gen12 hw has a very
large GAMMA_LUT size of 262145 slots, but also issues with its
current GAMMA_LUT implementation, as of Linux 5.14.
Therefore we keep GAMMA_LUT off for large LUTs. This currently
excludes Intel Icelake, Tigerlake and later.
This can be overridden via the "UseGammaLUT" boolean xorg.conf option
to force use of GAMMA_LUT on or off.
See following link for the Tigerlake situation:
https://gitlab.freedesktop.org/drm/intel/-/issues/3916#note_1085315
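A minimal sketch of the gate and its override (the option token is
illustrative; xf86ReturnOptValBool() is the standard option helper):

    #define MAX_TRUSTED_GAMMA_LUT_SIZE 4096  /* verified with amdgpu-kms */

    Bool use_gamma_lut = gamma_lut_size > 0 &&
                         gamma_lut_size <= MAX_TRUSTED_GAMMA_LUT_SIZE;

    /* "UseGammaLUT" in xorg.conf forces the decision either way. */
    use_gamma_lut = xf86ReturnOptValBool(drmmode->Options,
                                         OPTION_USE_GAMMA_LUT,
                                         use_gamma_lut);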
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
With the GBM backend becoming usable with different drivers such as
NVIDIA, set the GLVND vendor to the same value as the GBM backend name.
The Mesa implementation, however, returns "drm", so we need to
special-case this value. Basically, for anything other than "drm" we
simply assume that the GBM backend name is the same as the vendor.
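A minimal sketch of that mapping (gbm_device_get_backend_name() is
real GBM API; the xwl fields and the "mesa" string are assumptions
based on the description above):

    const char *backend_name =
        gbm_device_get_backend_name(xwl_gbm->gbm_device);

    if (strcmp(backend_name, "drm") == 0)
        xwl_screen->glvnd_vendor = "mesa";   /* Mesa reports "drm" */
    else
        xwl_screen->glvnd_vendor = backend_name;  /* e.g. "nvidia" */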
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
Reviewed-by: James Jones <jajones@nvidia.com>
Tested-by: James Jones <jajones@nvidia.com>
Xwayland was passing GBM bos directly to
eglCreateImageKHR using the EGL_NATIVE_PIXMAP_KHR
target. Given the EGL GBM platform spec claims it
is invalid to create an EGLSurface from a native
pixmap on the GBM platform, implying there is no
mapping between GBM objects and EGL's concept of
native pixmaps, this seems a bit questionable.
This change modifies the bo import function to
extract all the required data from the bo and then
imports it as a dma-buf instead when the dma-buf +
modifiers path is available.
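A minimal sketch of the single-plane case (the real code also checks
for EGL_EXT_image_dma_buf_import_modifiers support and handles
multi-plane formats; all GBM and EGL calls shown are real API):

    #include <unistd.h>
    #include <gbm.h>
    #include <epoxy/egl.h>

    static EGLImageKHR
    import_bo_as_dmabuf(EGLDisplay dpy, struct gbm_bo *bo)
    {
        int fd = gbm_bo_get_fd(bo);
        uint64_t modifier = gbm_bo_get_modifier(bo);
        EGLint attribs[] = {
            EGL_WIDTH, (EGLint) gbm_bo_get_width(bo),
            EGL_HEIGHT, (EGLint) gbm_bo_get_height(bo),
            EGL_LINUX_DRM_FOURCC_EXT, (EGLint) gbm_bo_get_format(bo),
            EGL_DMA_BUF_PLANE0_FD_EXT, fd,
            EGL_DMA_BUF_PLANE0_OFFSET_EXT, (EGLint) gbm_bo_get_offset(bo, 0),
            EGL_DMA_BUF_PLANE0_PITCH_EXT, (EGLint) gbm_bo_get_stride(bo),
            EGL_DMA_BUF_PLANE0_MODIFIER_LO_EXT, (EGLint) (modifier & 0xffffffff),
            EGL_DMA_BUF_PLANE0_MODIFIER_HI_EXT, (EGLint) (modifier >> 32),
            EGL_NONE
        };
        /* dma-buf import goes through EGL_NO_CONTEXT, NULL buffer. */
        EGLImageKHR image = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                                              EGL_LINUX_DMA_BUF_EXT, NULL,
                                              attribs);

        close(fd);  /* EGL holds its own reference to the dma-buf */
        return image;
    }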
Signed-off-by: James Jones <jajones@nvidia.com>
Reviewed-by: Simon Ser <contact@emersion.fr>
Xwayland's xwl_shm_create_pixmap() computes the size of the shared
memory pool to create using a size_t, yet the Wayland protocol uses an
integer for that size.
If the pool size becomes larger than INT32_MAX, we end up asking Wayland
to create a shared memory pool of negative size which in turn will raise
a protocol error which terminates the Wayland connection, and therefore
Xwayland.
Avoid that issue early by returning a NULL pixmap in that case, which
will trigger a BadAlloc error but leave Xwayland alive.
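A minimal sketch of the early check (variable names follow the
description; the real code computes the size from the pixmap's stride
and height):

    #include <stdint.h>

    size_t size = (size_t) stride * height;

    if (size > INT32_MAX)
        return NULL;  /* wl_shm_create_pool() takes an int32 size; a
                       * NULL pixmap here becomes a BadAlloc for the
                       * client */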
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Jonas Ådahl <jadahl@gmail.com>
This reverts commit 617f591fc4.
The problem described in that commit exists, but the two
preceding commits with improvements to the server's RandR
code should avoid the mentioned problems while allowing the
use of GAMMA_LUTs instead of the legacy gamma lut.
Use of legacy gamma luts is not a good fix, because it will reduce
color output precision on gpus with more than 1024 GAMMA_LUT
slots, e.g., AMD, ARM MALI and KOMEDA with 4096-slot luts,
and some MediaTek parts with 512-slot luts. On KOMEDA, legacy
luts are completely unsupported by the kms driver, so gamma
correction gets disabled.
The situation is especially bad on Intel Icelake and later:
Use of legacy gamma tables will cause the kms driver to switch
to hardware legacy luts with 256 slots, 8 bits wide, without
interpolation. This way color output precision is restricted to
8 bpc, and any deep color / HDR output (10 bpc, fp16, fixed point 16)
becomes impossible. The latest generation of Intel gpus would have
worse color precision than parts that are more than 10 years old.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
The assumption in the upsampling code was that the crtc->gamma_size
size of the crtc's gamma table is a power of two. This is true for
almost all current driver + gpu combos at least on Linux, with typical
sizes of 256, 512, 1024 or 4096 slots.
However, Intel Gen-11 Icelake and later are outliers, as their gamma
table has 2^18 + 1 slots, very big and not a power of two!
Try to make upsampling behave at least reasonably: Replicate the
last gamma value to fill up remaining crtc->gamma_red/green/blue
slots, which would normally stay uninitialized. This is important,
because while the intel display driver does not actually use all
2^18+1 values passed as part of a GAMMA_LUT, it does need the
very last slot, which would not get initialized by the old code.
This should hopefully create reasonable behaviour on Icelake+,
but is untested on actual Intel hardware due to the lack of
suitable hardware.
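A minimal sketch of the replication step ('filled' stands for the
number of slots the upsampler actually wrote; names are illustrative):

    /* Replicate the last written value into the remaining slots of a
     * non-power-of-two lut (e.g. slot 2^18 on Icelake), instead of
     * leaving them uninitialized. */
    for (int i = filled; i < crtc->gamma_size; i++) {
        crtc->gamma_red[i]   = crtc->gamma_red[filled - 1];
        crtc->gamma_green[i] = crtc->gamma_green[filled - 1];
        crtc->gamma_blue[i]  = crtc->gamma_blue[filled - 1];
    }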
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
If randrp->palette_size is zero, the memcpy() path can read past the
end of the randr_crtc's gammaRed/Green/Blue tables if the hw crtc's
gamma_size is greater than the randr_crtc's gammaSize.
Avoid this by clamping the to-be-copied size to the smaller of both
sizes.
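A minimal sketch of the clamped copy (min() is the usual server-side
macro from misc.h):

    int slots = min(crtc->gamma_size, randr_crtc->gammaSize);

    memcpy(crtc->gamma_red,   randr_crtc->gammaRed,   slots * sizeof(CARD16));
    memcpy(crtc->gamma_green, randr_crtc->gammaGreen, slots * sizeof(CARD16));
    memcpy(crtc->gamma_blue,  randr_crtc->gammaBlue,  slots * sizeof(CARD16));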
Note that during regular server startup, the memcpy() path is only
taken initially twice, but then a suitable palette is created for
use during a session. Therefore, during an actual running X-Session,
xf86RandR12CrtcComputeGamma() will be used, which makes sure that
data is properly up- or down-sampled for mismatching source and
target crtc gamma sizes.
This should avoid reading past randr_crtc gamma memory for gpu's
with big crtc->gamma_size, e.g., AMD/MALI/KOMEDA 4096 slots, or
Intel Icelake and later with 262145 slots.
Tested against modesetting-ddx and amdgpu-ddx under screen color
depth 24 (8 bpc) and 30 (10 bpc) to make sure that clamping happens
properly.
This is an alternative fix for the one attempted in commit
617f591fc4.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
We turn this on if the GL underneath us can enable GL_FRAMEBUFFER_SRGB.
We do try to generate both capable and incapable configs, which is to
keep llvmpipe working until the client side gets smarter about its srgb
capabilities.
xwl_present_reset_timer checks if the pending flip is synchronous, so
we need to call it after adding the pending flip to the flip queue.
Closes: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1219
Fixes: b2a06e0700 "xwayland/present: Drop sync_flip member of struct xwl_present_window"
Tested-by: Olivier Fourdan <ofourdan@redhat.com>
Acked-by: Olivier Fourdan <ofourdan@redhat.com>
Rotation is broken for all drm drivers not providing hardware rotation
support. Drivers that give direct access to vram and do not need dirty
updates still work, but only by accident. The problem is caused by
modesetting not sending the correct fb_id to drmModeDirtyFB() and
passing the damage rects in the rotated state rather than as the crtc
expects them. This patch takes care of both problems.
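A minimal sketch of the two fixes (names are illustrative;
drmModeDirtyFB() is real libdrm API): when the crtc is rotated, the
per-crtc shadow fb is what the kernel scans out, so that is the id
drmModeDirtyFB() must receive, together with rects already transformed
into crtc coordinates.

    uint32_t fb_id = drmmode_crtc->rotate_fb_id ?
                     drmmode_crtc->rotate_fb_id : drmmode->fb_id;

    /* 'clips' were transformed from screen space to crtc space above. */
    drmModeDirtyFB(ms->fd, fb_id, clips, num_clips);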
Signed-off-by: Patrik Jakobsson <pjakobsson@suse.de>
This is not actually a change for xwayland with gbm, or for xfree86 with
big-GL, but we do change them as well to use EGL_NO_CONFIG_KHR
explicitly.
Reviewed-by: Emma Anholt <emma@anholt.net>
There's no real benefit to using GLX, and the other DDXes are using EGL
already, so let's converge on EGL so we can concentrate the fixes in one
place.
We go to some effort to avoid being the thing that requires libX11 here.
We prefer EGL_EXT_platform_xcb over _x11, and if forced to use the
latter we'll ask the dynamic linker for XGetXCBConnection and
XOpenDisplay rather than link against xlib stuff ourselves. Xephyr is
now a pure XCB application if it can be.
Reviewed-by: Emma Anholt <emma@anholt.net>
Due to a typo in tablet_pad_group(), we would allocate a variable
("group") and test another one ("pad") for allocation success.
Spotted by covscan.
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Fixes: commit 8475e63 - "xwayland: add tablet pad support"
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
On screen init, if any of the private type registrations fails we would
return FALSE without actually freeing the xwl_screen we just allocated.
This is not a serious leak as failure at that point would lead to the
premature termination of Xwayland at startup, but covscan complains and
it's easy enough to fix.
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
The Linux version of xf86EnableIO calls a helper function called hwEnableIO().
Except on Alpha, this function reads /proc/ioports looking for the 'keyboard'
and 'timer' ports, extracts the port ranges, and enables access to them. It does
this by reading 4 bytes from the string for the start port number and 4 bytes
for the last port number, passing those to atoi(). However, it doesn't add a
fifth byte for a NUL terminator, so some implementations of atoi() read past the
end of this string, triggering an AddressSanitizer error:
==1383==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7fff71fd5b74 at pc 0x7fe1be0de3e0 bp 0x7fff71fd5ae0 sp 0x7fff71fd5288
READ of size 5 at 0x7fff71fd5b74 thread T0
#0 0x7fe1be0de3df in __interceptor_atoi /build/gcc/src/gcc/libsanitizer/asan/asan_interceptors.cpp:520
#1 0x564971adcc45 in hwEnableIO ../hw/xfree86/os-support/linux/lnx_video.c:138
#2 0x564971adce87 in xf86EnableIO ../hw/xfree86/os-support/linux/lnx_video.c:174
#3 0x5649719f6a30 in InitOutput ../hw/xfree86/common/xf86Init.c:439
#4 0x564971585924 in dix_main ../dix/main.c:190
#5 0x564971b6246e in main ../dix/stubmain.c:34
#6 0x7fe1bdab6b24 in __libc_start_main (/usr/lib/libc.so.6+0x27b24)
#7 0x564971490e9d in _start (/home/aaron/git/x/xserver/build.asan/hw/xfree86/Xorg+0xb2e9d)
Address 0x7fff71fd5b74 is located in stack of thread T0 at offset 100 in frame
#0 0x564971adc96a in hwEnableIO ../hw/xfree86/os-support/linux/lnx_video.c:118
This frame has 3 object(s):
[32, 40) 'n' (line 120)
[64, 72) 'buf' (line 122)
[96, 100) 'target' (line 122) <== Memory access at offset 100 overflows this variable
HINT: this may be a false positive if your program uses some custom stack unwind mechanism, swapcontext or vfork
(longjmp and C++ exceptions *are* supported)
SUMMARY: AddressSanitizer: stack-buffer-overflow /build/gcc/src/gcc/libsanitizer/asan/asan_interceptors.cpp:520 in __interceptor_atoi
Shadow bytes around the buggy address:
0x10006e3f2b10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10006e3f2b20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10006e3f2b30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10006e3f2b40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10006e3f2b50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x10006e3f2b60: 00 00 f1 f1 f1 f1 00 f2 f2 f2 00 f2 f2 f2[04]f3
0x10006e3f2b70: f3 f3 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10006e3f2b80: 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1
0x10006e3f2b90: f1 f1 f8 f2 00 f2 f2 f2 f8 f3 f3 f3 00 00 00 00
0x10006e3f2ba0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1
0x10006e3f2bb0: f1 f1 00 f3 f3 f3 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==1383==ABORTING
Fix this by NUL-terminating the string.
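A minimal sketch of the fix (the buffer name mirrors the ASan report's
'target'; the surrounding parsing is simplified):

    char target[5];            /* one extra byte for the terminator */

    memcpy(target, str, 4);    /* 4 ASCII digits from /proc/ioports */
    target[4] = '\0';          /* atoi() now stops inside our buffer */
    start = atoi(target);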
Fixes: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1193#note_1053306
Signed-off-by: Aaron Plattner <aplattner@nvidia.com>
GAMMA_LUT sizes other than 1024 cause a crash during startup if the memcpy()
calls in xf86RandR12CrtcSetGamma() read past the end of the legacy X11 /
XVidMode gamma ramp.
This is a problem on Intel ICL / GEN11 platforms because they report a GAMMA_LUT
size of 262145. Since it's not clear that the modesetting driver will generate a
proper gamma ramp at that size even if xf86RandR12CrtcSetGamma() is fixed, just
disable use of GAMMA_LUT for sizes other than 1024 for now. This will cause the
modesetting driver to disable the CTM property and fall back to the legacy gamma
LUT.
Signed-off-by: Aaron Plattner <aplattner@nvidia.com>
Fixes: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1193
Tested-by: Mark Herbert
Whenever an unredirected fullscreen window uses pageflipping for a
DRI3/Present PresentPixmap() operation and the X-Screen has more than
one active output, multiple crtc's need to execute pageflips. Only
after the last flip has completed can the PresentPixmap operation
as a whole complete.
If a sync_flip is requested for the present, then the current
implementation will synchronize each pageflip to the vblank of
its associated crtc. This provides tear-free image presentation
across all outputs, but introduces a different artifact, if not
all outputs run at the same refresh rate with perfect synchrony:
The slowest output throttles the presentation rate, and present
completion is delayed until the flip of the last ("latest") output
completes. This means degraded performance, e.g., a dual-display
setup with a 144 Hz monitor and a 60 Hz monitor will always be
throttled to at most 60 fps. It also means non-constant present
rate if refresh cycles drift against each other, creating complex
"beat patterns", tremors, stutters and periodic slowdowns - quite
irritating!
Such a scenario will be especially annoying if one uses multiple
outputs in "mirror mode" aka "clone mode". One output will usually
be the "production output" with the highest quality and fastest
display attached, whereas a secondary mirror output just has a
cheaper display for monitoring attached. Users care about perfect
and perfectly timed tear-free presentation on the "production output",
but care less about quality on the secondary "mirror output". They
are willing to trade quality on secondary outputs away in exchange
for better presentation timing on the "production output".
One example use case for such production + monitoring displays are
neuroscience / medical science applications where one high quality
display device is used to present visual animations to test subjects
or patients in a fMRI scanner room (production display), whereas
an operator monitors the same visual animations from a control room
on a lower quality display. Presentation timing needs to be perfect,
and animations high-speed and tear-free for the production display,
whereas quality and timing don't matter for the monitoring display.
This commit gives users the option to choose such a trade-off as
opt-in:
It adds a new boolean option "AsyncFlipSecondaries" to the device section
of xorg.conf. If this option is specified as true, then DRI3 pageflip
behaviour changes as follows:
1. The "reference crtc" for a windows PresentPixmap operation does a
vblank synced flip, or a DRM_MODE_PAGE_FLIP_ASYNC non-synchronized
flip, as requested by the caller, just as in the past. Typically
flips will be requested to be vblank synchronized for tear-free
presentation. The "reference crtc" is the one chosen by the caller
to drive presentation timing (as specified by PresentPixmap()'s
"target_msc", "divisor", "remainder" parameters and implemented by
vblank events) and to deliver Present completion timestamps (msc
and ust) extracted from its pageflip completion event.
2. All other crtc's, which also page-flip in a multi-display configuration,
will try to flip with DRM_MODE_PAGE_FLIP_ASYNC, ie. immediately and
not synchronized to vblank. This allows the PresentPixmap operation
to complete with little delay compared to a single-display present,
especially if the different crtc's run at different video refresh
rates or their refresh cycles are not perfectly synchronized, but
drift against each other. The downside is potential tearing artifacts
on all outputs apart from the one of the "reference crtc".
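A minimal sketch of the resulting per-crtc flag choice (variable names
are illustrative; drmModePageFlip() and the flip flags are real libdrm
API):

    uint32_t flags = DRM_MODE_PAGE_FLIP_EVENT;

    if (crtc != ref_crtc && ms->async_flip_secondaries)
        flags |= DRM_MODE_PAGE_FLIP_ASYNC;  /* secondary: don't wait
                                             * for vblank */
    else if (!sync_flip)
        flags |= DRM_MODE_PAGE_FLIP_ASYNC;  /* caller asked for an
                                             * async flip */

    drmModePageFlip(ms->fd, crtc_id, fb_id, flags, flip_data);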
Successfully tested on an AMD gpu with single-display, dual-display and
triple-display setups, and with single-X-Screen as well as dual-X-Screen
"ZaphodHeads" configurations.
Please consider merging this commit for the upcoming server 1.21 branch.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
It turns out xdmx currently crashes when any client attempts to use GL,
and it has been in such a state for about 14 years. There was a patch to
fix the problem [1] 4 years ago, but it never got merged. The last
activity on any bugs referring to xdmx has been more than 4 years ago.
Given this situation, I find it unlikely that anyone is still using
xdmx, and just having the code around is a drain on resources.
[1]: https://lists.x.org/archives/xorg-devel/2017-June/053919.html
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
When using DRI3+Present with PRIME render offload, sometimes there is
a mismatch between the stride of the to-be-presented Pixmap and the
frontbuffer. The current code would reject a pageflip present in this
case if atomic modesetting is not enabled, ie. always, as atomic
modesetting is disabled by default due to brokeness in the current
modesetting-ddx.
Fullscreen presents without page flipping however trigger the copy
path as fallback, which causes not only unreliable presentation timing
and degraded performance, but also massive tearing artifacts due to
rendering to the framebuffer without any hardware sync to vblank.
Tearing is especially bad on modesetting-ddx because glamor, as far
as I can see, implements the copy by drawing a textured triangle
strip rather than using a dedicated blitter engine; the rasterization
pattern creates particularly ugly tearing artifacts.
We can do better: According to a tip from Michel Daenzer (thanks!),
at least atomic modesetting capable kms drivers should be able to
reliably change scanout stride during a pageflip, even if atomic
modesetting is not actually enabled for the modesetting client.
This commit adds detection logic to find out if the underlying kms
driver is atomic_modeset_capable, and if so, it no longer rejects
page flip presents on mismatched stride between new Pixmap and
frontbuffer.
We (ab)use a call to drmSetClientCap(ms->fd, DRM_CLIENT_CAP_ATOMIC, 0);
for this purpose. The call itself has no practical effect, as it
requests disabling atomic mode, although atomic mode is disabled by
default. However, the return value of drmSetClientCap() tells us if the
underlying kms driver is atomic modesetting capable: An atomic driver
will return 0 for success. A legacy non-atomic driver will return a
non-zero error code, either -EINVAL for early atomic Linux versions
4.0 - 4.19 (or for non-atomic Linux 3.x and earlier), or -EOPNOTSUPP
for Linux 4.20 and later.
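A minimal sketch of the probe (drmSetClientCap() and
DRM_CLIENT_CAP_ATOMIC are real libdrm API; the ms-> fields are
illustrative):

    /* Requesting 'disable atomic' is a no-op, since atomic is already
     * off by default, but only an atomic-capable kms driver returns
     * success. */
    int ret = drmSetClientCap(ms->fd, DRM_CLIENT_CAP_ATOMIC, 0);

    ms->atomic_modeset_capable = (ret == 0);
    /* Legacy drivers: -EINVAL (Linux 4.0-4.19) or -EOPNOTSUPP (4.20+). */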
Testing on a MacBookPro 2017 with an Intel Kabylake display gpu +
AMD Polaris11 as prime render offload gpu, X-Server master + Mesa
21.0.3, shows improvement from unbearable tearing to perfect, despite a stride
mismatch between display gpu and Pixmap of 11776 Bytes vs. 11520
Bytes. That this is correct behaviour was also confirmed by comparing the
behaviour and .check_flip implementation of the patched modesetting-ddx
against the current intel-ddx SNA Present implementation.
Please consider merging this patch before the server-1.21 branch point.
This patch could also be cherry-picked into the server 1.20 branch to
fix the same limitation.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
In some scenarios, the Wayland compositor might have more knowledge
than the X11 server and may be able to perform pointer emulation for
touch events better. Add a command-line switch to allow compositors
to turn Xwayland pointer emulation off.
Signed-off-by: Simon Ser <contact@emersion.fr>
A misplaced error check can cause this failure scenario, and does
so reliably as tested on Ubuntu 21.04 with KDE Plasma 5 desktop
within the first few seconds of login session startup, rendering
VRR under modesetting-ddx unusable:
1. Some X11 client application changes some window property.
2. ms_change_property() is called as part of the property change
handling call chain (client->requestVector[X_ChangeProperty]).
It removes itself temporarily from the call chain - or so it
thinks, hooking up saved_change_property instead.
3. ret = saved_change_property(client) is called and fails
temporarily for some non-critical reason.
4. The misplaced error check returns early (error abort), without
first restoring ms_change_property() as initial X_ChangeProperty
handler in the call chain again.
-> Now ms_change_property() has removed itself permanently from the
property handler call chain for the remainder of the X session
and VRR property changes on windows are no longer handled, ie.
VRR no longer gets enabled/disabled in response to window VRR
property changes.
Place the error check at the proper place, just as it is correctly
done by amdgpu-ddx, and in modesetting-ddx ms_delete_property()
function.
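A minimal sketch of the corrected ordering (names follow the
description above; the VRR-specific handling is elided):

    static int
    ms_change_property(ClientPtr client)
    {
        int ret;

        /* Temporarily unhook ourselves and call the saved handler. */
        client->requestVector[X_ChangeProperty] = saved_change_property;
        ret = saved_change_property(client);
        /* Re-hook unconditionally, *before* the error check, so a
         * transient failure cannot permanently unhook us. */
        client->requestVector[X_ChangeProperty] = ms_change_property;

        if (ret != Success)
            return ret;

        /* ... VRR window-property handling ... */
        return ret;
    }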
Verified to fix VRR handling with an AMD gpu under KDE desktop
session.
Please consider merging before branching the server 1.21 branch.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
The xf86CVTMode() was implemented in a standalone source file because it
was being used for both the xfree86 API and the standalone cvt utility.
Now that the cvt utility is removed (as part of libxcvt) we can move the
small xf86CVTMode() function with the rest of the xf86Modes sources.
Closes: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1142
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
The cvt utility is now replaced by the standalone version found in
libxcvt, no need to build the one in xfree86 anymore.
Closes: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1142
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Replace the local implementation of the VESA CVT standard timing
modelines generator with the one from libxcvt to avoid code duplication.
Closes: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1142
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Xwayland is using a copy of the CVT generator found in Xorg.
Rather than duplicating the code within the xserver tree, use the
libxcvt implementation instead.
Closes: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1142
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
If there is an explicit configuration, assign the RandR provider
of the GPUDevice to the screen it was specified for.
If there is no configuration (the default case), the screen number is
still 0, so this doesn't change behaviour.
The result is e.g.:
# DISPLAY=:0.2 xrandr --listproviders
Providers: number : 2
Provider 0: id: 0xd2 cap: 0x2, Sink Output crtcs: 1 outputs: 1 associated providers: 0 name:modesetting
Provider 1: id: 0xfd cap: 0xb, Source Output, Sink Output, Sink Offload crtcs: 2 outputs: 2 associated providers: 0 name:Intel
Signed-off-by: Zoltán Böszörményi <zboszor@gmail.com>