Commit Graph

17729 Commits

Peter Hutterer 2c6989f81e xkb: fix XkbSetMap check for the keytypes count
The previous if/else condition resulted in us always setting the key
type count to the current number of key types. Split this up correctly.
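
A minimal sketch of the corrected branch, assuming XKB protocol request field
names (req->present, req->firstType, req->nTypes) and not the literal
xkb/xkb.c code:

    #include <X11/extensions/XKB.h>
    #include <X11/extensions/XKBproto.h>

    /* Hypothetical helper: only derive the key type count from the request
     * when it actually carries key type data; otherwise keep the current
     * count instead of always overwriting it with the existing num_types. */
    static unsigned
    key_type_count(const xkbSetMapReq *req, unsigned current_num_types)
    {
        if (req->present & XkbKeyTypesMask)
            return req->firstType + req->nTypes;
        return current_num_types;
    }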

Regression introduced in de940e06f8

Fixes #1249

Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
(cherry picked from commit be16bd8543)
2021-12-04 18:05:29 +02:00
Povilas Kanapickas 49444ce9f7 Revert "hw/xfree86: Propagate physical dimensions from DRM connector"
Quite a lot of applications currently expect the screen DPI exposed by
the X server to be 96 even when the real display DPI is different.
Additionally, currently Xwayland completely ignores any hardware
information and sets the DPI to 96. Accordingly, the new behavior, even
if it fixes a bug, should not be enabled automatically for all users.

A better solution would be to make the default DPI stay as is and enable
the correct behavior with a command line option (maybe -dpi auto, or
similar). For now let's just revert the bug fix.

This reverts commit 05b3c681ea.

Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
(cherry picked from commit 35af1299e7)
2021-12-03 03:10:06 +02:00
Dave Airlie 6f11b3c803 dri2: add crocus to the list of va_gl users
(cherry picked from commit a7b0a7fabd)
2021-12-03 03:10:06 +02:00
Jocelyn Falempe 8eb1396d3e xf86/logind: Fix drm_drop_master before vt_reldisp
When switching to VT, the ioctl DRM_DROP_MASTER must be done before
the ioctl VT_RELDISP. Otherwise the kernel can't change the modesetting
reliably, and this leads to the console not showing up in some cases, like
after unplugging a docking station with a DP or HDMI monitor.

Before doing the VT_RELDISP, send a dbus message to logind, to
pause the drm device, so logind will do the ioctl DRM_DROP_MASTER.

With this patch, the order in which logind sends the resume events changes:
the drm device is now resumed last instead of first, so there is also a fix
to call systemd_logind_vtenter() at the right time.

Signed-off-by: Jocelyn Falempe <jfalempe@redhat.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
(cherry picked from commit da9d012a9c)
2021-12-03 00:46:11 +00:00
Jocelyn Falempe 6834f977a5 xf86/logind: fix calling systemd_logind_vtenter after receiving drm device resume
logind sends the resume event for input devices and the drm device
in any order. If we call vt_enter before logind resumes the drm device,
it leads to a driver error, because logind has not done the
DRM_IOCTL_SET_MASTER ioctl on it yet.

Keep the old workaround to make sure we call systemd_logind_vtenter at
least once if there are no platform devices.

Signed-off-by: Jocelyn Falempe <jfalempe@redhat.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
(cherry picked from commit f5bd039633)
2021-12-03 00:46:11 +00:00
nerdopolis 0ed7b1224e xfree86: On Linux, while only seat0 can have TTYs, don't assume all seat0s have TTYs
(cherry picked from commit b8c12aac65)
2021-11-23 13:01:55 -08:00
Povilas Kanapickas 6b997fb74e xserver 21.1.1
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
2021-11-07 01:34:24 +02:00
Povilas Kanapickas 70a0c48ff3 Makefile.am: Add missing meson build files to release tarball 2021-11-06 21:22:23 +02:00
Matthieu Herrb db0da823f5 Make xf86CompatOutput() return NULL when there are no privates
Some drivers (mach64 w/o DRI for instance) don't initialize privates.

Signed-off-by: Matthieu Herrb <matthieu.herrb@laas.fr>
(cherry picked from commit 80eeff3eba)
2021-11-06 19:06:20 +02:00
Jon Turney fdb266ff81 s/__/@/ in inputtestdrv manpage
Update manpage substitution style for 2e497bf8

(cherry picked from commit 0099412ea4)
2021-11-06 19:06:10 +02:00
Povilas Kanapickas 27a0ee32cc xserver 21.1
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
2021-10-27 13:38:26 +03:00
Mario Kleiner d4944ceda1 Fix RandR leasing for more than 1 simultaneously active lease.
Due to a switched order of parameters in the xorg_list_add()
call inside ProcRRCreateLease(), adding a new lease for RandR
output leasing does not actually add the new RRLeasePtr lease
record to the list of existing leases for an X-Screen, but instead
replaces the existing list with a new list that has the new lease
as the only element, and probably leaks a bit of memory.
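
A self-contained toy model of the argument-order bug described above; the
lease/screen field names are illustrative, but xorg_list_add(entry, head)
mirrors the server's list helper, which links entry in right after head:

    #include <stdio.h>

    struct xorg_list { struct xorg_list *next, *prev; };

    static void xorg_list_init(struct xorg_list *l) { l->next = l->prev = l; }

    /* Links 'entry' in right after 'head'. */
    static void xorg_list_add(struct xorg_list *entry, struct xorg_list *head)
    {
        entry->next = head->next;
        entry->prev = head;
        head->next->prev = entry;
        head->next = entry;
    }

    struct lease { int id; struct xorg_list list; };

    int main(void)
    {
        struct xorg_list leases;            /* per-screen list of active leases */
        struct lease a = { .id = 1 }, b = { .id = 2 };
        int n = 0;

        xorg_list_init(&leases);
        xorg_list_add(&a.list, &leases);    /* correct argument order */
        xorg_list_add(&b.list, &leases);    /* both leases stay on the list */
        /* The buggy call had the arguments swapped, i.e.
         * xorg_list_add(&leases, &b.list); which re-links the screen's list
         * head into the new lease and "forgets" previously added leases. */

        for (struct xorg_list *it = leases.next; it != &leases; it = it->next)
            n++;
        printf("%d leases tracked\n", n);   /* prints "2 leases tracked" */
        return 0;
    }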

Therefore the server "forgets" all active leases for a screen,
except for the last added lease. If multiple leases are created
in a session, then destruction of all leases but the last one
will fail in many cases, e.g., during server shutdown in
RRCloseScreen(), or resource destruction, e.g., in
RRCrtcDestroyResource().

Most importantly, it fails if a client simply close(fd)'es the
DRM master descriptor to release a lease, quits, gets killed or
crashes. In this case the kernel will destroy the lease and shut
down the display output, then send a lease event via udev to the
ddx, which e.g., in the modesetting-ddx will trigger a call to
drmmode_validate_leases().

That function is supposed to detect the released lease and tell
the server to terminate the lease on the server side as well,
via xf86CrtcLeaseTerminated(), but this doesn't happen for all
the leases the server has forgotten. The end result is a dead
video output, as the server won't reinitialize the crtc's
corresponding to the terminated but forgotten lease.

This bug was observed when using the amdvlk AMD OSS Vulkan
driver and trying to lease multiple VKDisplay's, and also
under Mesa radv, as both Mesa Vulkan/WSI/Display and amdvlk
terminate leases by simply close()ing the lease fd, not by
sending explicit RandR protocol requests to free leases.

Leasing worked, but ending a session with multiple active
leases ended in a lot of unpleasant darkness.

Fixing the wrong argument order to xorg_list_add() fixes the
problem. Tested on single-X-Screen and dual-X-Screen setups,
with one, two or three active leases.

Please merge this for the upcoming server 21.1 branch.
Merging into server 1.20 would also make a lot of sense.

Fixes: e4e3447603
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Cc: Keith Packard <keithp@keithp.com>
(cherry picked from commit f467f85ca1)
2021-10-21 13:08:01 +03:00
Povilas Kanapickas e59faa4b8c xserver 21.1 RC 2 2021-10-14 17:33:40 +03:00
Alex Richardson 3fb94f3c5c dix/privates.c: Avoid undefined behaviour after realloc()
Adding the offset between the realloc result and the old allocation to
update pointers into the new allocation is undefined behaviour: the
old pointers are no longer valid after realloc() according to the C
standard. While this works on almost all architectures and compilers,
it causes problems on architectures that track pointer bounds (e.g.
CHERI or Arm's Morello): the DevPrivateKey pointers will still have the
bounds of the previous allocation and therefore any dereference will
result in a run-time trap.

I found this due to a crash (dereferencing an invalid capability) while
trying to run `XVnc` on a CHERI-RISC-V system. With this commit I can
successfully connect to the XVnc instance running inside a QEMU with a
VNC viewer on my host.

This also changes the check whether the allocation was moved to use
uintptr_t instead of a pointer since according to the C standard:
"The value of a pointer becomes indeterminate when the object it
points to (or just past) reaches the end of its lifetime." Casting to an
integer type avoids this undefined behaviour.
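
A minimal sketch of the resulting pattern, using assumed names rather than the
actual dix/privates.c code: pointers into a realloc()ed buffer are rebuilt from
the new base via stored offsets, and the moved-or-not check compares integers
instead of the stale pointer:

    #include <stdint.h>
    #include <stdlib.h>

    struct table {
        char   *buf;        /* storage that realloc() may move */
        size_t  key_off[4]; /* offsets of the keys inside buf */
        char   *key[4];     /* cached pointers into buf */
    };

    static int table_grow(struct table *t, size_t new_size)
    {
        uintptr_t old = (uintptr_t) t->buf;
        char *new_buf = realloc(t->buf, new_size);

        if (!new_buf)
            return -1;

        /* Compare as integers: the old pointer value is indeterminate after a
         * successful realloc(), so neither dereference it nor use it for
         * pointer arithmetic (key[i] += new_buf - old_buf would be UB). */
        if ((uintptr_t) new_buf != old) {
            for (int i = 0; i < 4; i++)
                t->key[i] = new_buf + t->key_off[i];  /* rebuild from new base */
        }

        t->buf = new_buf;
        return 0;
    }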

Signed-off-by: Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
(cherry picked from commit f9f705bf3c)
2021-10-08 21:38:01 +03:00
nerdopolis b89fdd523e xf86: Accept devices with the 'simpledrm' driver.
SimpleDRM 'devices' are fallback devices and do not have a busid,
so they were getting skipped. This allows simpledrm to work
with the modesetting driver.

(cherry picked from commit b9218fadf3)
2021-10-08 21:38:00 +03:00
Mario Kleiner fbc690ccaf modesetting: Consider RandR primary output for selection of sync crtc.
The "sync crtc" is the crtc used to drive the display timing of a
drawable under DRI2 and DRI3/Present. If a drawable intersects
multiple video outputs, then normally the crtc is chosen which has
the largest intersection area with the drawable.

If multiple outputs / crtc's have exactly the same intersection
area, then the crtc chosen was simply the first one with maximum
intersection. Iow. the choice was random, depending on plugging
order of displays.

This adds the ability to choose a preferred output in such a tie
situation. The RandR output marked as "primary output" is chosen
on such a tie.

This new behaviour and its implementation is consistent with other
video ddx drivers. See amdgpu-ddx, ati-ddx and nouveau-ddx for
reference. This commit is a straightforward port from amdgpu-ddx.

Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
(cherry picked from commit 4b75e65766)
2021-10-08 21:37:59 +03:00
Mario Kleiner 22f4ff1026 modesetting: Handle mixed VRR and non-VRR display setups better.
In a setup with both VRR capable and non-VRR capable displays,
it was so far inconsistent whether the driver would allow use of
VRR support or not, as "is_connector_vrr_capable" was set to
whatever the capabilities of the last added drm output were.
Iow. the plugging order of monitors determined the outcome.

Fix this: Now if at least one display is VRR capable, the driver
will treat an X-Screen as VRR capable, and plugging order no
longer matters.

Tested with a dual-display setup with one VRR monitor and one
non-VRR monitor. This is also beneficial with the new Option
"AsyncFlipSecondaries".

While we are at it, also add the so far missing description of
the "VariableRefresh" driver option, copied from amdgpu-ddx.

Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
(cherry picked from commit 017ce26337)
2021-10-08 21:37:58 +03:00
Mario Kleiner 0d0986bf3b modesetting: Enable GAMMA_LUT for lut's with up to 4096 slots.
A lut size of 4096 slots has been verified to work correctly,
as tested with amdgpu-kms. Intel Tigerlake Gen12 hw has a very
large GAMMA_LUT size of 262145 slots, but also issues with its
current GAMMA_LUT implementation, as of Linux 5.14.

Therefore we keep GAMMA_LUT off for large lut's. This currently
excludes Intel Icelake, Tigerlake and later.

This can be overridden via the "UseGammaLUT" boolean xorg.conf option
to force use of GAMMA_LUT on or off.
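
A hypothetical xorg.conf snippet forcing the option on despite a large lut
(the Identifier is illustrative):

    Section "Device"
        Identifier "Card0"
        Driver     "modesetting"
        Option     "UseGammaLUT" "on"
    EndSection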

See following link for the Tigerlake situation:
https://gitlab.freedesktop.org/drm/intel/-/issues/3916#note_1085315

Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
(cherry picked from commit 66e5a5bb12)
2021-10-08 21:37:57 +03:00
Ray Strode bc1327e6de xkb: Drop check for XkbSetMapResizeTypes
Commit 446ff2d317 added checks to
prevalidate the size of incoming SetMap requests.

That commit checks for the XkbSetMapResizeTypes flag to be set before
allowing key types data to be processed.

Key types data can be changed, or even just sent wholesale unchanged,
without the number of key types changing, however. The check for
XkbSetMapResizeTypes rejects those legitimate requests. In particular,
XkbChangeMap never sets XkbSetMapResizeTypes and so always fails now
any time XkbKeyTypesMask is in the changed mask.

This commit drops the check for XkbSetMapResizeTypes in flags when
prevalidating the request length.

(cherry picked from commit 8b7f4d3259)
2021-10-08 21:37:56 +03:00
Mario Kleiner b87536682c Revert "modesetting: Only use GAMMA_LUT if its size is 1024"
This reverts commit 617f591fc4.

The problem described in that commit exists, but the two
preceding commits with improvements to the server's RandR
code should avoid the mentioned problems while allowing the
use of GAMMA_LUT's instead of the legacy gamma lut.

Use of legacy gamma lut's is not a good fix, because it will reduce
color output precision of gpu's with more than 1024 GAMMA_LUT
slots, e.g., AMD, ARM MALI and KOMEDA with 4096 slot luts,
and some MediaTek parts with 512 slot luts. On KOMEDA, legacy
lut's are completely unsupported by the kms driver, so gamma
correction gets disabled.

The situation is especially bad on Intel Icelake and later:
Use of legacy gamma tables will cause the kms driver to switch
to hardware legacy lut's with 256 slots, 8 bit wide, without
interpolation. This way color output precision is restricted to
8 bpc and any deep color / HDR output (10 bpc, fp16, fixed point 16)
becomes impossible. The latest Intel gen gpu's would have worse
color precision than parts which are more than 10 years old.

Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
(cherry picked from commit 545fa90cbf)
2021-10-08 21:37:55 +03:00
Mario Kleiner 473a48660f xfree86: Let xf86RandR12CrtcComputeGamma() deal with non-power-of-2 sizes.
The assumption in the upsampling code was that the crtc->gamma_size
size of the crtc's gamma table is a power of two. This is true for
almost all current driver + gpu combos at least on Linux, with typical
sizes of 256, 512, 1024 or 4096 slots.

However, Intel Gen-11 Icelake and later are outliers, as their gamma
table has 2^18 + 1 slots, very big and not a power of two!

Try to make upsampling behave at least reasonably: Replicate the
last gamma value to fill up remaining crtc->gamma_red/green/blue
slots, which would normally stay uninitialized. This is important,
because while the intel display driver does not actually use all
2^18+1 values passed as part of a GAMMA_LUT, it does need the
very last slot, which would not get initialized by the old code.
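
A sketch of that tail replication for one gamma channel, with illustrative
names rather than the literal xf86RandR12CrtcComputeGamma() code:

    #include <stddef.h>
    #include <stdint.h>

    /* After 'filled' entries have been computed by upsampling, copy the last
     * computed value into the remaining slots so a non-power-of-two table
     * (e.g. 2^18 + 1 on Icelake+) is fully initialized. */
    static void
    gamma_fill_tail(uint16_t *lut, size_t filled, size_t gamma_size)
    {
        if (filled == 0)
            return;
        for (size_t i = filled; i < gamma_size; i++)
            lut[i] = lut[filled - 1];
    }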

This should hopefully create reasonable behaviour with Icelake+
but is untested on the actual Intel hw due to lack of suitable
hw.

Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
(cherry picked from commit 7326e131df)
2021-10-08 21:37:54 +03:00
Mario Kleiner b33f487a7c xfree86: Avoid crash in xf86RandR12CrtcSetGamma() memcpy path.
If randrp->palette_size is zero, the memcpy() path can read past the
end of the randr_crtc's gammaRed/Green/Blue tables if the hw crtc's
gamma_size is greater than the randr_crtc's gammaSize.

Avoid this by clamping the to-be-copied size to the smaller of both
sizes.
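
A sketch of the clamped copy with assumed names, not the actual
xf86RandR12CrtcSetGamma() code:

    #include <stdint.h>
    #include <string.h>

    /* Copy at most as many gamma entries as both tables actually hold, so a
     * large hw table (e.g. 4096 or 262145 slots) never reads past the end of
     * the smaller RandR table. */
    static void
    copy_gamma_channel(uint16_t *dst, size_t dst_slots,
                       const uint16_t *src, size_t src_slots)
    {
        size_t n = dst_slots < src_slots ? dst_slots : src_slots;
        memcpy(dst, src, n * sizeof(*dst));
    }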

Note that during regular server startup, the memcpy() path is only
taken initially twice, but then a suitable palette is created for
use during a session. Therefore during an actual running X-Session,
the xf86RandR12CrtcComputeGamma() will be used, which makes sure that
data is properly up- or down-sampled for mismatching source and
target crtc gamma sizes.

This should avoid reading past randr_crtc gamma memory for gpu's
with big crtc->gamma_size, e.g., AMD/MALI/KOMEDA 4096 slots, or
Intel Icelake and later with 262145 slots.

Tested against modesetting-ddx and amdgpu-ddx under screen color
depth 24 (8 bpc) and 30 (10 bpc) to make sure that clamping happens
properly.

This is an alternative fix for the one attempted in commit
617f591fc4.

Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
(cherry picked from commit 966f567432)
2021-10-08 21:37:53 +03:00
Olivier Fourdan d1ca47e124 glamor: Fix leak in glamor_build_program()
Fix the possible leak of `vs_prog_string` and `fs_prog_string` in case
of failure, as reported by covscan.

Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
(cherry picked from commit 2906ee5e4a)
2021-10-08 21:37:52 +03:00
Povilas Kanapickas d00e7dfc9d xserver 21.1 RC 1
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
2021-09-21 23:37:48 +03:00
Povilas Kanapickas 4ee66f574a Drop XWayland DDX 2021-09-15 21:49:17 +03:00
Aaron Plattner 72c5d153c9 xfree86: NUL-terminate strings in hwEnableIO
The Linux version of xf86EnableIO calls a helper function called hwEnableIO().
Except on Alpha, this function reads /proc/ioports looking for the 'keyboard'
and 'timer' ports, extracts the port ranges, and enables access to them. It does
this by reading 4 bytes from the string for the start port number and 4 bytes
for the last port number, passing those to atoi(). However, it doesn't add a
fifth byte for a NUL terminator, so some implementations of atoi() read past the
end of this string, triggering an AddressSanitizer error:

  ==1383==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7fff71fd5b74 at pc 0x7fe1be0de3e0 bp 0x7fff71fd5ae0 sp 0x7fff71fd5288
  READ of size 5 at 0x7fff71fd5b74 thread T0
      #0 0x7fe1be0de3df in __interceptor_atoi /build/gcc/src/gcc/libsanitizer/asan/asan_interceptors.cpp:520
      #1 0x564971adcc45 in hwEnableIO ../hw/xfree86/os-support/linux/lnx_video.c:138
      #2 0x564971adce87 in xf86EnableIO ../hw/xfree86/os-support/linux/lnx_video.c:174
      #3 0x5649719f6a30 in InitOutput ../hw/xfree86/common/xf86Init.c:439
      #4 0x564971585924 in dix_main ../dix/main.c:190
      #5 0x564971b6246e in main ../dix/stubmain.c:34
      #6 0x7fe1bdab6b24 in __libc_start_main (/usr/lib/libc.so.6+0x27b24)
      #7 0x564971490e9d in _start (/home/aaron/git/x/xserver/build.asan/hw/xfree86/Xorg+0xb2e9d)

  Address 0x7fff71fd5b74 is located in stack of thread T0 at offset 100 in frame
      #0 0x564971adc96a in hwEnableIO ../hw/xfree86/os-support/linux/lnx_video.c:118

    This frame has 3 object(s):
      [32, 40) 'n' (line 120)
      [64, 72) 'buf' (line 122)
      [96, 100) 'target' (line 122) <== Memory access at offset 100 overflows this variable
  HINT: this may be a false positive if your program uses some custom stack unwind mechanism, swapcontext or vfork
        (longjmp and C++ exceptions *are* supported)
  SUMMARY: AddressSanitizer: stack-buffer-overflow /build/gcc/src/gcc/libsanitizer/asan/asan_interceptors.cpp:520 in __interceptor_atoi
  Shadow bytes around the buggy address:
    0x10006e3f2b10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    0x10006e3f2b20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    0x10006e3f2b30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    0x10006e3f2b40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    0x10006e3f2b50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  =>0x10006e3f2b60: 00 00 f1 f1 f1 f1 00 f2 f2 f2 00 f2 f2 f2[04]f3
    0x10006e3f2b70: f3 f3 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    0x10006e3f2b80: 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1
    0x10006e3f2b90: f1 f1 f8 f2 00 f2 f2 f2 f8 f3 f3 f3 00 00 00 00
    0x10006e3f2ba0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1
    0x10006e3f2bb0: f1 f1 00 f3 f3 f3 00 00 00 00 00 00 00 00 00 00
  Shadow byte legend (one shadow byte represents 8 application bytes):
    Addressable:           00
    Partially addressable: 01 02 03 04 05 06 07
    Heap left redzone:       fa
    Freed heap region:       fd
    Stack left redzone:      f1
    Stack mid redzone:       f2
    Stack right redzone:     f3
    Stack after return:      f5
    Stack use after scope:   f8
    Global redzone:          f9
    Global init order:       f6
    Poisoned by user:        f7
    Container overflow:      fc
    Array cookie:            ac
    Intra object redzone:    bb
    ASan internal:           fe
    Left alloca redzone:     ca
    Right alloca redzone:    cb
    Shadow gap:              cc
  ==1383==ABORTING

Fix this by NUL-terminating the string.
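
A minimal sketch of that fix, with illustrative names rather than the exact
lnx_video.c code:

    #include <stdlib.h>
    #include <string.h>

    /* Copy the 4-digit port number into a buffer with room for a terminator
     * before calling atoi(), so atoi() never reads past the end. */
    static int parse_port(const char *digits)
    {
        char target[5];          /* 4 digits + NUL terminator */

        memcpy(target, digits, 4);
        target[4] = '\0';
        return atoi(target);
    }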

Fixes: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1193#note_1053306
Signed-off-by: Aaron Plattner <aplattner@nvidia.com>
2021-09-10 11:20:47 -07:00
Aaron Plattner 617f591fc4 modesetting: Only use GAMMA_LUT if its size is 1024
GAMMA_LUT sizes other than 1024 cause a crash during startup if the memcpy()
calls in xf86RandR12CrtcSetGamma() read past the end of the legacy X11 /
XVidMode gamma ramp.

This is a problem on Intel ICL / GEN11 platforms because they report a GAMMA_LUT
size of 262145. Since it's not clear that the modesetting driver will generate a
proper gamma ramp at that size even if xf86RandR12CrtcSetGamma() is fixed, just
disable use of GAMMA_LUT for sizes other than 1024 for now. This will cause the
modesetting driver to disable the CTM property and fall back to the legacy gamma
LUT.

Signed-off-by: Aaron Plattner <aplattner@nvidia.com>
Fixes: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1193
Tested-by: Mark Herbert
2021-09-09 22:45:06 -07:00
Povilas Kanapickas e59e24c877 glamor: Fix handling of 1-bit pixmaps
Since 8702c938b3 the pixmap formats are
handled in a single place. In the process of conversion the difference
between pixmap formats that can be uploaded and those that can be
rendered on the GL side has been lost. This affects only 1-bit pixmaps:
they aren't supported in GL, but can be converted to an R8 or A8 format
for rendering (see glamor_get_tex_format_type_from_pictformat()).

To work around this we add a separate flag that specifies whether the
format actually supports rendering in GL, convert all checks to use this
flag and then add 1-bit pixmap formats that don't support rendering in
GL.

Fixes: 8702c938b3
Closes: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1210
Acked-by: Olivier Fourdan <ofourdan@redhat.com>
Tested-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
2021-09-09 23:59:06 +00:00
Mario Kleiner 68f01c0f02 modesetting: Add option for non-vsynced flips for "secondary" outputs.
Whenever an unredirected fullscreen window uses pageflipping for a
DRI3/Present PresentPixmap() operation and the X-Screen has more than
one active output, multiple crtc's need to execute pageflips. Only
after the last flip has completed can the PresentPixmap operation
as a whole complete.

If a sync_flip is requested for the present, then the current
implementation will synchronize each pageflip to the vblank of
its associated crtc. This provides tear-free image presentation
across all outputs, but introduces a different artifact, if not
all outputs run at the same refresh rate with perfect synchrony:
The slowest output throttles the presentation rate, and present
completion is delayed to flip completion of the "latest" output
to complete. This means degraded performance, e.g., a dual-display
setup with a 144 Hz monitor and a 60 Hz monitor will always be
throttled to at most 60 fps. It also means non-constant present
rate if refresh cycles drift against each other, creating complex
"beat patterns", tremors, stutters and periodic slowdowns - quite
irritating!

Such a scenario will be especially annoying if one uses multiple
outputs in "mirror mode" aka "clone mode". One output will usually
be the "production output" with the highest quality and fastest
display attached, whereas a secondary mirror output just has a
cheaper display for monitoring attached. Users care about perfect
and perfectly timed tear-free presentation on the "production output",
but care less about quality on the secondary "mirror output". They
are willing to trade quality on secondary outputs away in exchange
for better presentation timing on the "production output".

One example use case for such production + monitoring displays are
neuroscience / medical science applications where one high quality
display device is used to present visual animations to test subjects
or patients in a fMRI scanner room (production display), whereas
an operator monitors the same visual animations from a control room
on a lower quality display. Presentation timing needs to be perfect,
and animations high-speed and tear-free for the production display,
whereas quality and timing don't matter for the monitoring display.

This commit gives users the option to choose such a trade-off as
opt-in:

It adds a new boolean option "AsyncFlipSecondaries" to the device section
of xorg.conf. If this option is specified as true, then DRI3 pageflip
behaviour changes as follows:

1. The "reference crtc" for a window's PresentPixmap operation does a
   vblank synced flip, or a DRM_MODE_PAGE_FLIP_ASYNC non-synchronized
   flip, as requested by the caller, just as in the past. Typically
   flips will be requested to be vblank synchronized for tear-free
   presentation. The "reference crtc" is the one chosen by the caller
   to drive presentation timing (as specified by PresentPixmap()'s
   "target_msc", "divisor", "remainder" parameters and implemented by
   vblank events) and to deliver Present completion timestamps (msc
   and ust) extracted from its pageflip completion event.

2. All other crtc's, which also page-flip in a multi-display configuration,
   will try to flip with DRM_MODE_PAGE_FLIP_ASYNC, ie. immediately and
   not synchronized to vblank. This allows the PresentPixmap operation
   to complete with little delay compared to a single-display present,
   especially if the different crtc's run at different video refresh
   rates or their refresh cycles are not perfectly synchronized, but
   drift against each other. The downside is potential tearing artifacts
   on all outputs apart from the one of the "reference crtc".
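
A hypothetical xorg.conf device section opting into this behaviour (the
Identifier is illustrative):

    Section "Device"
        Identifier "Card0"
        Driver     "modesetting"
        Option     "AsyncFlipSecondaries" "true"
    EndSection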

Successfully tested on a AMD gpu with single-display, dual-display and
triple-display setups, and with single-X-Screen as well as dual-X-Screen
"ZaphodHeads" configurations.

Please consider merging this commit for the upcoming server 1.21 branch.

Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
2021-09-09 09:53:21 +00:00
Ignacio Casal Quinteiro 1fd5dec11b touchevents: set the screen pointer after checking the device is enabled
If the device is disabled, the sprite is NULL, so we get a segfault.
2021-09-07 16:58:10 +02:00
Povilas Kanapickas b3b81c8c20 Drop DMX DDX
It turns out xdmx currently crashes when any client attempts to use GL
and it has been in such state for about 14 years. There was a patch to
fix the problem [1] 4 years ago, but it never got merged. The last
activity on any bugs referring to xdmx has been more than 4 years ago.

Given this situation, I find it unlikely that anyone is still using xdmx
and just having the code is a drain of resources.

[1]: https://lists.x.org/archives/xorg-devel/2017-June/053919.html

Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
2021-09-07 09:34:31 +00:00
Mario Kleiner 8f8ebf870b modesetting: Allow Present flips with mismatched stride on atomic drivers.
When using DRI3+Present with PRIME render offload, sometimes there is
a mismatch between the stride of the to-be-presented Pixmap and the
frontbuffer. The current code would reject a pageflip present in this
case if atomic modesetting is not enabled, ie. always, as atomic
modesetting is disabled by default due to brokeness in the current
modesetting-ddx.

Fullscreen presents without page flipping however trigger the copy
path as fallback, which causes not only unreliable presentation timing
and degraded performance, but also massive tearing artifacts due to
rendering to the framebuffer without any hardware sync to vblank.
Tearing is extra awful on modesetting-ddx because glamor afaics seems
to use drawing of a textured triangle strip for the copy implementation,
not a dedicated blitter engine. The rasterization pattern creates extra
awful tearing artifacts.

We can do better: According to a tip from Michel Daenzer (thanks!),
at least atomic modesetting capable kms drivers should be able to
reliably change scanout stride during a pageflip, even if atomic
modesetting is not actually enabled for the modesetting client.

This commit adds detection logic to find out if the underlying kms
driver is atomic_modeset_capable, and if so, it no longer rejects
page flip presents on mismatched stride between new Pixmap and
frontbuffer.

We (ab)use a call to drmSetClientCap(ms->fd, DRM_CLIENT_CAP_ATOMIC, 0);
for this purpose. The call itself has no practical effect, as it
requests disabling atomic mode, although atomic mode is disabled by
default. However, the return value of drmSetClientCap() tells us if the
underlying kms driver is atomic modesetting capable: An atomic driver
will return 0 for success. A legacy non-atomic driver will return a
non-zero error code, either -EINVAL for early atomic Linux versions
4.0 - 4.19 (or for non-atomic Linux 3.x and earlier), or -EOPNOTSUPP
for Linux 4.20 and later.
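
A minimal sketch of that probe using libdrm, wrapped in a helper name of our
own rather than the modesetting-ddx code itself:

    #include <stdbool.h>
    #include <xf86drm.h>

    /* Requesting DRM_CLIENT_CAP_ATOMIC with value 0 is effectively a no-op
     * (atomic stays disabled), but only atomic-capable kernel drivers return
     * success; legacy drivers fail with -EINVAL or -EOPNOTSUPP. */
    static bool kms_is_atomic_capable(int drm_fd)
    {
        return drmSetClientCap(drm_fd, DRM_CLIENT_CAP_ATOMIC, 0) == 0;
    }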

Testing on a MacBookPro 2017 with Intel Kabylake display server gpu +
AMD Polaris11 as prime renderoffload gpu, X-Server master + Mesa 21.0.3
shows improvement from unbearable tearing to perfect, despite a stride
mismatch between display gpu and Pixmap of 11776 Bytes vs. 11520
Bytes. That this is correct behaviour was also confirmed by comparing the
behaviour and .check_flip implementation of the patched modesetting-ddx
against the current intel-ddx SNA Present implementation.

Please consider merging this patch before the server-1.21 branch point.
This patch could also be cherry-picked into the server 1.20 branch to
fix the same limitation.

Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
2021-09-07 09:21:39 +00:00
Simon Ser 7d34b1f2b7 xwayland: add -noTouchPointerEmulation
In some scenarios, the Wayland compositor might have more knowledge
than the X11 server and may be able to perform pointer emulation for
touch events better. Add a command-line switch to allow compositors
to turn Xwayland pointer emulation off.

Signed-off-by: Simon Ser <contact@emersion.fr>
2021-09-06 21:19:46 +00:00
Mario Kleiner 7c63c582a1 Revert "glamor: Enable modifier support for xfree86 too"
This reverts commit 9b89994110.

Turns out that defaulting glamor_egl->dmabuf_capable = TRUE
breaks kms page-flipping on various Mesa+Linux/DRM-KMS+hardware
combos, resulting in broken presentation timing, degraded performance
and awful tearing. E.g., my testing shows that X-Server master +
Mesa 21.2 + Linux 5.3 on Intel Kabylake has broken pageflipping.
Similar behaviour was observed in the past on RaspberryPi 4/400
with VideoCore-6 by myself and others, and iirc by myself on some
AMD gpu's, although my memories of the latter are a bit dim.

Cfe. https://gitlab.freedesktop.org/mesa/mesa/-/issues/3601 and
possibly https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/254
for related problems.

The reason for pageflip failure on the modesetting-ddx under
DRI3/Present seems to be the following sequence:

1. Atomic modesetting for the modesetting-ddx is broken and therefore
   both disabled by default in the modesetting-ddx itself and also
   force-disabled by the Linux kernel for quite a while. If the kernel
   detects drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1); from the
   X-Server, it will reject the request, as a countermeasure to all the
   past and current brokenness.

2. Without DRM_CLIENT_CAP_ATOMIC we don't get the implied universal
   planes support (DRM_CLIENT_CAP_UNIVERSAL_PLANES).

3. Without DRM_CLIENT_CAP_UNIVERSAL_PLANES, drmModeGetPlaneResources()
   will only return overlay planes, but not primary- or cursor planes.

4. As the modesetting-ddx drmmode_crtc_create_planes() function can only
   operate on primary planes, but can't get any from drmModeGetPlaneResources(),
   the drmmode_crtc_create_planes() mostly turns into a no-op, never
   executes populate_format_modifiers() and therefore the Linux kernels
   DRM-KMS driver is not ever queried for the list of scanout/pageflip
   capable DRM format modifiers. Iow. the drmmode_crtc->formats[i].modifiers
   list stays empty with zero drmmode_crtc->formats[i].num_modifiers.

5. The list from step 4 provides the format+modifiers for intersection
   which would get returned by the X-Servers DRI3 backend as response to
   a xcb_dri3_get_supported_modifiers_window_modifiers() request. Given
   an empty list was returned in step 4, this will lead to return of an
   empty modifiers list by xcb_dri3_get_supported_modifiers_window_modifiers().

6. Both Mesa's DRI3/Present OpenGL backbuffer allocation logic and iirc
   Mesa/Vulkan/WSI/X11's swapchain image allocation logic use the list
   from xcb_dri3_get_supported_modifiers_window_modifiers() for format+
   modifier selection for scanout/pageflip capable buffers. Cfe. Mesa's
   dri3_alloc_render_buffer() function.

   Due to the empty list, the Mesa code falls back to the format+modifiers
   reported by xcb_dri3_get_supported_modifiers_screen_modifiers()
   instead. This list contains all modifiers reported by GLAMOR as
   result of glamor_get_formats() and glamor_get_modifiers(), which
   in turn are query results from Mesa eglQueryDmaBufFormatsEXT()
   and eglQueryDmaBufModifiersEXT(). Iow. all format+modifiers which
   are supported for rendering are considered for the OpenGL backbuffers
   and Vulkan swapchain buffers.

7. Depending on kms driver + gpu combo and Mesa version, such buffers
   are often not direct-scanout / pageflip capable, and so pageflipping
   can't be used for DRI3/Present of fullscreen windows. Whenever the
   system has to fallback to copies instead of pageflips, the results
   are broken presentation timing, degraded performance and quite
   horrible tearing, as the current DRI3/Present implementation does not
   perform any hardware synchronization of copy presents to the start
   of vblank or similar.

By defaulting glamor_egl->dmabuf_capable = FALSE instead, as the server
1.20 branch does, we avoid this failure:

1. glamor_get_modifiers() turns into a no-op and returns false, not
   reporting any supported dmabuf modifiers to the servers DRI3 code,
   ie. the servers cache_formats_and_modifiers() function can't retrieve
   and cache any format+modifiers. Therefore the servers DRI3 code now
   also reports an empty format+modifiers list when Mesa does a
   xcb_dri3_get_supported_modifiers_screen_modifiers() query.

2. Mesa's buffer allocation code therefore falls back to using the old
   DRI image extensions createImage() function to allocate buffers
   with use flags __DRI_IMAGE_USE_SCANOUT | __DRI_IMAGE_USE_BACKBUFFER
   and our OpenGL backbuffers / Vulkan swapchain images get allocated
   in a direct-scanout / pageflip capable format. Pageflipping works,
   timing and performance is good, presentation is tear-free.

Please consider merging this for branching the X-Server 1.21 branch.

Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
2021-09-01 18:51:02 +00:00
Jon Turney c5a9287dcf Don't underlink inputtest on targets which require complete linkage
Don't underlink inputtest on targets which require complete linkage
(e.g. when building for PE/COFF)
2021-09-01 13:57:40 +00:00
Jon Turney d68b50ec03 Fix compilation with windows.h from latest w32api
misc.h has complex logic (checking whether MAXSHORT is undefined, etc.)
controlling whether it includes assert.h or not.

Including windows.h from w32api 9.0.0 now trips over that, causing
assert.h to not be included, causing various errors, e.g.

In file included from ../include/cursor.h:53,
                 from ../include/dix.h:54,
                 from ../os/osdep.h:139,
                 from ../hw/xwin/winauth.c:40:
../include/privates.h: In function ‘dixGetPrivateAddr’:
../include/privates.h:121:5: error: implicit declaration of function ‘assert’ [-Werror=implicit-function-declaration]

Fix this by applying IWYU (include what you use) in privates.h.
2021-08-31 16:56:28 +00:00
Mario Kleiner ab86be0ed9 modesetting: Fix VRR window property handling.
A misplaced error check can cause this failure scenario, and does
so reliably as tested on Ubuntu 21.04 with KDE Plasma 5 desktop
within the first few seconds of login session startup, rendering
VRR under modesetting-ddx unusable:

1. Some X11 client application changes some window property.

2. ms_change_property() is called as part of the property change
   handling call chain (client->requestVector[X_ChangeProperty]).
   It removes itself temporarily from the call chain - or so it
   thinks, hooking up saved_change_property instead.

3. ret = saved_change_property(client) is called and fails
   temporarily for some non-critical reason.

4. The misplaced error check returns early (error abort), without
   first restoring ms_change_property() as initial X_ChangeProperty
   handler in the call chain again.

-> Now ms_change_property() has removed itself permanently from the
   property handler call chain for the remainder of the X session
   and VRR property changes on windows are no longer handled, ie.
   VRR no longer gets enabled/disabled in response to window VRR
   property changes.

Place the error check at the proper place, just as it is correctly
done by amdgpu-ddx, and in the modesetting-ddx ms_delete_property()
function.
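
A self-contained toy model of the wrapper pattern, with illustrative names
(the real hook wraps client->requestVector[X_ChangeProperty] inside the
modesetting driver):

    #include <stdio.h>

    typedef int (*handler_t)(void);

    static handler_t request_vector[1];      /* stand-in for the dispatch table */
    static handler_t saved_change_property;  /* the original handler */

    static int ms_change_property(void)
    {
        int ret;

        request_vector[0] = saved_change_property;  /* let the real handler run */
        ret = saved_change_property();
        request_vector[0] = ms_change_property;     /* re-hook FIRST ...         */
        if (ret != 0)                               /* ... THEN check for errors */
            return ret;

        /* ... VRR window property handling would go here ... */
        return ret;
    }

    static int real_change_property(void) { return 0; }

    int main(void)
    {
        saved_change_property = real_change_property;
        request_vector[0] = ms_change_property;
        printf("dispatch returned %d\n", request_vector[0]());
        return 0;
    }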

Verified to fix VRR handling with an AMD gpu under KDE desktop
session.

Please consider merging before branching the server 1.21 branch.

Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
2021-08-27 18:26:59 +00:00
Povilas Kanapickas 18d3131f9a meson: Implement developer documentation build
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
2021-08-20 10:26:07 +00:00
Povilas Kanapickas 722da1c62c meson: Add option to disable libdrm support
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
2021-08-20 07:11:44 +00:00
Adam Jackson 1f720dc9a3 dmx: Fix some redeclaration warnings from gcc 11
Of the form:

    ../hw/dmx/config/xdmxconfig.c:68:26: warning: redundant redeclaration of ‘dmxConfigEntry’ [-Wredundant-decls]
       68 | extern DMXConfigEntryPtr dmxConfigEntry;
          |                          ^~~~~~~~~~~~~~
2021-08-17 16:06:57 -04:00
Adam Jackson b49f0f9b32 xkb: Silence a warning from gcc 11
I get this:

    In function ‘TryCopyStr’,
        inlined from ‘CopyISOLockArgs’ at ../xkb/xkbtext.c:875:9:
    ../xkb/xkbtext.c:720:13: warning: ‘tbuf’ may be used uninitialized [-Wmaybe-uninitialized]
      720 |             strcat(to, from);
          |             ^~~~~~~~~~~~~~~~
    ../xkb/xkbtext.c: In function ‘CopyISOLockArgs’:
    <built-in>: note: by argument 1 of type ‘const char *’ to ‘__builtin_strlen’ declared here
    ../xkb/xkbtext.c:871:10: note: ‘tbuf’ declared here
      871 |     char tbuf[64];
          |          ^~~~

Just initialize tbuf so it definitely works.
2021-08-17 16:04:40 -04:00
Adam Jackson c1138d8ec8 xinput: Silence a warning from gcc 11
[45/388] Compiling C object Xi/liblibxserver_xi.a.p/xichangehierarchy.c.o
../Xi/xichangehierarchy.c:61:32: warning: argument 1 of type ‘int[256]’ with mismatched bound [-Warray-parameter=]
   61 | XISendDeviceHierarchyEvent(int flags[MAXDEVICES])
      |                            ~~~~^~~~~~~~~~~~~~~~~
In file included from ../Xi/xichangehierarchy.c:54:
../Xi/xichangehierarchy.h:42:37: note: previously declared as ‘int[]’
   42 | void XISendDeviceHierarchyEvent(int flags[]);
      |                                 ~~~~^~~~~~~
2021-08-17 16:02:44 -04:00
Adam Jackson f3a98334e6 selinux: Stop using security_context_t
This is apparently deprecated now, and is (and always was) just char *.
2021-08-17 16:02:39 -04:00
Simon Ser 6f63873da5 xwayland: fix xdg_output leak
The xdg_output wasn't cleaned up when destroying the xwl_output.

Signed-off-by: Simon Ser <contact@emersion.fr>
2021-08-12 07:07:26 +00:00
Demi Marie Obenour 659260a0b7 More missing version checks in SProcs
The bug in XFixes was also found in GenericEvent and Damage.
2021-08-08 12:43:01 +00:00
Roman Gilg e4dabe8d93 Remove build-only include from public header
The dix-config.h file is not installed, but dix.h is. The include makes the
compilation of external drivers fail (for example the libinput driver).

The Xserver compilation also works without the include, so just remove it.

Signed-off-by: Roman Gilg <subdiff@gmail.com>
2021-08-07 14:26:26 +02:00
Demi Marie Obenour 31fb35a7af XFixes: add version check for byteswapped clients
The version check for byteswapped clients was missing.
2021-08-06 11:12:40 -04:00
Olivier Fourdan 7eba412072 xfree86: Move xf86CVTMode() function
The xf86CVTMode() was implemented in a standalone source file because it
was being used for both the xfree86 API and the standalone cvt utility.

Now that the cvt utility is removed (as part of libxcvt) we can move the
small xf86CVTMode() function with the rest of the xf86Modes sources.

Closes: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1142
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
2021-08-06 11:29:29 +00:00
Olivier Fourdan f2781e9536 xfree86/cvt: Drop cvt utility
The cvt utility is now replaced by the standalone version found in
libxcvt, no need to build the one in xfree86 anymore.

Closes: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1142
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
2021-08-06 11:29:29 +00:00
Olivier Fourdan f788289bdc xfree86: Use libxcvt
Replace the local implementation of the VESA CVT standard timing
modelines generator with the one from libxcvt to avoid code duplication.

Closes: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1142
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
2021-08-06 11:29:29 +00:00