Changes the check for trying the modesetting driver from if defined(__linux__)
to a meson check for whether we built the driver for this platform.
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
I wanted to simplify the logic, and thought this was a good opportunity
to eliminate local diffs.
I don't want to list OSes without wsfb, because I understand that is a
NetBSD/OpenBSD driver, and always keep it as a fallback for us.
Additionally, I understand "fbdev" is Linux-specific, so the logic now
matches this intent.
Finishes the work started in commit cd0d4c1bb5
to remove checks for the variable that never varied from 0 after the code
to change it was removed by commit 511c60bc73
in 2006 (xorg-server-1.2.0).
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Correctness is ensured by checking the md5sum of the result before and after
the commit (it's the same).
Fixes LGTM warning "Comparison is always false because numTimings <= 0."
Signed-off-by: Konstantin Kharlamov <Hi-Angel@yandex.ru>
Put in a workaround to accept devices of the kernel's hyperv_drm
driver. Makes Xorg work on HyperV Gen 1/2 with the DRM graphics
stack.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Initially reported downstream in Gentoo. Manifests with errors like:
```
gnu/bin/ld: hw/xfree86/common/libxorg_common.a(xf86fbBus.c.o): in function `xf86ClaimFbSlot':
xf86fbBus.c:(.text+0x20): undefined reference to `sbusSlotClaimed'
/usr/lib/gcc/sparc-unknown-linux-gnu/11.2.0/../../../../sparc-unknown-linux-gnu/bin/ld: xf86fbBus.c:(.text+0x2c): undefined reference to `sbusSlotClaimed'
```
While we use the headers in meson.build, we don't reference xf86sbusBus.c
which defines the missing symbols like sbusSlotClaimed.
Bug: https://bugs.gentoo.org/828513
Signed-off-by: Sam James <sam@gentoo.org>
Add a new interface to _rrScrPriv to make it possible for the server to
delay answering a lease request, at the cost of blocking the client. This
is needed for implementing drm-lease-v1, as the Wayland protocol has no
defined time table for responding to lease requests.
Signed-off-by: Xaver Hugl <xaver.hugl@gmail.com>
Acked-by: Michel Dänzer <mdaenzer@redhat.com>
When switching VTs, the ioctl DRM_DROP_MASTER must be done before
the ioctl VT_RELDISP. Otherwise the kernel can't reliably change the mode
setting, and this leads to the console not showing up in some cases, like
after unplugging a docking station with a DP or HDMI monitor.
Before doing the VT_RELDISP, send a dbus message to logind to
pause the drm device, so logind will do the ioctl DRM_DROP_MASTER.
With this patch, the order in which logind sends the resume events changes:
drm is now sent last instead of first, so there is also a fix to call
systemd_logind_vtenter() at the right time.
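A minimal sketch of the required ordering, assuming a libdrm fd and the console fd are at hand (in the actual patch logind performs the DRM_DROP_MASTER on our behalf; this only illustrates the sequence):
```
#include <xf86drm.h>      /* drmDropMaster() */
#include <sys/ioctl.h>
#include <linux/vt.h>     /* VT_RELDISP */

static void
leave_vt(int drm_fd, int console_fd)
{
    drmDropMaster(drm_fd);               /* 1: give up DRM master first */
    ioctl(console_fd, VT_RELDISP, 1);    /* 2: only then release the VT */
}
```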
Signed-off-by: Jocelyn Falempe <jfalempe@redhat.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Quite a lot of applications currently expect the screen DPI exposed by
the X server to be 96 even when the real display DPI is different.
Additionally, Xwayland currently ignores any hardware information
completely and sets the DPI to 96. Accordingly, the new behavior, even
if it fixes a bug, should not be enabled automatically for all users.
A better solution would be to make the default DPI stay as is and enable
the correct behavior with a command line option (maybe -dpi auto, or
similar). For now let's just revert the bug fix.
This reverts commit 05b3c681ea.
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
SimpleDRM 'devices' are fallback devices and do not have a busid,
so they are getting skipped. This will allow simpledrm to work
with the modesetting driver.
If there is an explicit configuration, assign the RandR provider
of the GPUDevice to the screen it was specified for.
If there is no configuration (default case) the screen number is
still 0 so it doesn't change behaviour.
The result is e.g:
# DISPLAY=:0.2 xrandr --listproviders
Providers: number : 2
Provider 0: id: 0xd2 cap: 0x2, Sink Output crtcs: 1 outputs: 1 associated providers: 0 name:modesetting
Provider 1: id: 0xfd cap: 0xb, Source Output, Sink Output, Sink Offload crtcs: 2 outputs: 2 associated providers: 0 name:Intel
Signed-off-by: Zoltán Böszörményi <zboszor@gmail.com>
xf86_platform_devices[i].pdev may be NULL in cases where we fail to parse the
busid in config_udev_odev_setup_attribs() (see also [1], [2]), such as
when udev does not give us ID_PATH. This in turn leads to
platform_find_pci_info() not being called and pdev being NULL.
[1]: https://gitlab.freedesktop.org/xorg/xserver/-/issues/993
[2]: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1076
Reviewed-by: Zoltán Böszörményi <zboszor@gmail.com>
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
This is the only place where we don't check whether
primaryBus.id.plat->pdev is not NULL before accessing its members.
It may be NULL in cases where we fail to parse the busid in
config_udev_odev_setup_attribs() (see also [1], [2]), such as when udev
does not give us ID_PATH. This in turn leads to
platform_find_pci_info() not being called and pdev being NULL in one of
the items within the xf86_platform_devices array. For this to cause a
crash, we only need it to become the primaryBus device.
[1]: https://gitlab.freedesktop.org/xorg/xserver/-/issues/993
[2]: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1076
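A minimal sketch of the missing guard, using the member names from the description above (the surrounding condition is illustrative, not the exact code):
```
/* pdev may be NULL when the busid could not be parsed, see [1], [2] */
if (primaryBus.type == BUS_PLATFORM &&
    primaryBus.id.plat->pdev != NULL) {
    /* only now is it safe to dereference pdev's members */
}
```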
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
screenp->displays[count] (passed to configDisplay() in
configScreen()) is NULL if there is no Virtual setting
in the configuration.
Fixes: f8a6be04d0 ("xfree86: Change
displays array to pointers array to fix invalid pointer issues
after table reallocation")
Signed-off-by: Zoltán Böszörményi <zboszor@gmail.com>
The physical dimensions of a display can be obtained not just from configuration
or DDC, but also directly from the kernel via drmModeGetConnector(). Until now
the xserver silently discarded these values even when neither configuration nor
EDID was present, and fell back to the default DPI.
There are rare cases where xf86SetDepthBpp() resizes the displays array in
confScreen. As that array is shared between a set of ScrnInfoRec's, the realloc
might invalidate cached DispPtr display values in other ScrnInfoRec objects.
If we change the displays array to an array of pointers to DispRec, then cached
DispPtr values in ScrnInfoRec won't be invalidated by reallocation of the
displays array.
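A sketch of the layout change (DispRec is stubbed out here; the real one lives in hw/xfree86/common/xf86str.h):
```
typedef struct { int frameX0, frameY0; /* ... */ } DispRec, *DispPtr; /* stub */

typedef struct {
    DispPtr *displays;    /* was: DispRec *displays */
    int      numdisplays;
} confScreenSketch;

/* realloc() on the pointer table moves only the pointers; each
 * separately allocated DispRec stays put, so DispPtr values cached in
 * other ScrnInfoRec objects remain valid. */
```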
Signed-off-by: Łukasz Spintzyk <lukasz.spintzyk@synaptics.com>
On FreeBSD 13.0-CURRENT for PowerPC64 big-endian (BE), X was
crashing in some cases. For instance, when twm was started
and the background was clicked to open its menu, X crashed
with a segmentation fault, trying to dereference a null pointer
at CreatePicture().
There were 2 issues with xorg-server handling of RGB masks that
caused the pointer above to be null and thus the crash:
- wrong use of ffs() to get the RGB offsets from the masks
- overflow when shifting a 16-bit integer
This change fixes both issues. They happen when the system is BE
but has a video adapter using a little-endian (LE) ARGB32
framebuffer. In order to display the correct colors, this setup
requires a BE RGBA32 color format to be used by X, by setting
the RGB masks appropriately, which didn't work properly because of
the issues above.
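An illustrative sketch of the two bug classes (mask and value chosen for the example, not taken from the patch):
```
#include <strings.h>    /* ffs() */
#include <stdint.h>

static uint32_t
example(uint32_t mask, uint16_t component)
{
    /* ffs() returns the 1-based index of the lowest set bit, so the
     * bit offset needs the -1: */
    int offset = ffs(mask) - 1;           /* 0xff000000 -> 24, not 25 */

    /* a 16-bit value is promoted to (signed) int before a shift, so
     * shifting it into the top byte can overflow; widen first: */
    return (uint32_t)component << offset;
}
```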
The code path added by commit 69e4b8e6 (xfree86: attempt to autoconfig
gpu slave devices (v3)) assumes that it will only be run if the primary
device on the screen is the first device in xf86configptr->conf_device_lst.
While this is true most of the time, there are two specific cases where
this assumption fails.
First, if the first device in conf_device_lst is assigned to a different
seat than the running X server, it will be skipped by the previous
FIND_SUITABLE macro usage. Second, if the primary device was explicitly
assigned to the screen but auto_gpu_device is still set and no secondary
devices were explicitly listed, that device may not be the first device
in conf_device_lst.
When the first device in conf_device_lst is not the primary device
assigned to the screen, two problems emerge. First, the first device in
conf_device_lst will never be assigned to the screen as a secondary
device. Second, the primary device is additionally assigned to the
screen as a secondary device. The combination of these problems causes
certain otherwise valid configurations to be invalid. For example, if a
primary device is assigned to a screen and a secondary device is listed
in the config but not explicitly assigned to the screen, then one order
of the device sections results in a usable PRIME or Reverse PRIME setup
and the other order does not.
This commit removes the assumption that the primary device is the first
device in conf_device_lst by starting the loop from the start of
conf_device_lst and skipping the primary device when it is encountered.
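A sketch of the new loop shape, assuming xf86's GDevPtr config-device type; next_device(), add_as_secondary_gpu() and primary_device are hypothetical stand-ins for the actual list walking and assignment code:
```
GDevPtr dev;

for (dev = conf_device_lst; dev != NULL; dev = next_device(dev)) {
    if (dev == primary_device)
        continue;    /* the primary is already assigned to the screen */
    add_as_secondary_gpu(screen, dev);
}
```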
Signed-off-by: Jacob Cherry <jcherry@nvidia.com>
This adds a new flag POINTER_RAWONLY for GetPointerEvents() which does
pretty much the opposite of POINTER_NORAW.
Basically, this tells GetPointerEvents() that we only want the
DeviceChanged events and any raw events for this motion but no actual
motion events.
This is preliminary work for Xwayland to be able to use relative motion
events for raw events. Xwayland would use absolute events for raw
events, but some X11 clients (wrongly) assume raw events to be always
relative.
To allow such clients to work with Xwayland, it needs to switch to
relative raw events (if those are available from the Wayland
compositor).
However, Xwayland cannot use relative motion events for actual pointer
location because that would cause a drift over time, the pointer being
actually controlled by the Wayland compositor.
So Xwayland needs to be able to send only relative raw events, hence
this API.
Bump the ABI_XINPUT_VERSION minor version to reflect that API addition.
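A hypothetical usage sketch from a driver's perspective; dev, dx and dy are assumed, QueuePointerEvents(), the valuator_mask_* helpers and POINTER_RELATIVE are the existing input API, and POINTER_RAWONLY is the flag added here:
```
ValuatorMask mask;

valuator_mask_zero(&mask);
valuator_mask_set_double(&mask, 0, dx);
valuator_mask_set_double(&mask, 1, dy);

/* deliver DeviceChanged and raw events only; no motion event, so the
 * pointer position stays under the Wayland compositor's control */
QueuePointerEvents(dev, MotionNotify, 0,
                   POINTER_RELATIVE | POINTER_RAWONLY, &mask);
```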
v2: Actually avoid sending motion events (Peter)
v3: Keep sending raw emulated events with RAWONLY (Peter)
Suggested-by: Peter Hutterer <peter.hutterer@who-t.net>
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Related: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1130
The definition relies on IOPortBase, which is only ever set in
hw/xfree86/os-support/bsd/arm_video.c
This caused build failures on linux/mips with GCC 10, due to this
change (from https://gcc.gnu.org/gcc-10/changes.html#c):
"GCC now defaults to -fno-common. As a result, global variable accesses
are more efficient on various targets. In C, global variables with
multiple tentative definitions now result in linker errors. With
-fcommon such definitions are silently merged during linking."
As a result anything including compiler.h would get its own definition
of IOPortBase and the linker would error out.
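A reduced illustration of the failure mode and the conventional fix (the actual patch reworks where IOPortBase lives; this only shows the general shape):
```
/* compiler.h, before: a tentative definition, duplicated into every
 * translation unit that includes the header */
unsigned int IOPortBase;

/* With -fcommon the linker silently merged those; with GCC 10's
 * -fno-common default every object file owns a real definition and
 * the link fails.  The conventional shape instead is: */

/* in the header: */
extern unsigned int IOPortBase;

/* in exactly one .c file (here, bsd/arm_video.c): */
unsigned int IOPortBase;
```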
Commit 6a5a4e6037 removed the option to
configure useSIGIO option. Indeed, the xfree86 SIGIO support was
reworked to use internal versions of OsBlockSIGIO and OsReleaseSIGIO.
As a result, useSIGIO is no longer needed and can be dropped.
Fixes: 6a5a4e60 - Remove SIGIO support for input [v5]
Closes: xorg/xserver#1107
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Prabhu Sundararaj <prabhu.sundararaj@nxp.com>
Signed-off-by: Mylène Josserand <mylene.josserand@free-electrons.com>
Signed-off-by: Fabrice Fontaine <fontaine.fabrice@gmail.com>
With !155, the device bus ID received via udev is constructed
properly with the "usb:" prefix. But that alone is not enough to
make the following line work in Section "Device":
BusID "usb:0:1.2:1.0"
Introduce BUS_USB, so the prefix can be distinguished from BUS_PCI
and check the supplied BusID value against device->attribs->busid
in xf86PlatformDeviceCheckBusID().
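A sketch of the match, assuming xf86platformBus.h's struct xf86_platform_device with the attribs->busid member named above (the helper and its exact shape are illustrative):
```
#include <strings.h>    /* strncasecmp(), strcasecmp() */

/* a "usb:..." BusID can't be parsed as a PCI triplet; compare it
 * against the udev-provided bus id instead */
static int
busid_matches_usb(const char *busid, const struct xf86_platform_device *dev)
{
    if (strncasecmp(busid, "usb:", 4) != 0)
        return 0;    /* PCI and friends are handled elsewhere */
    return strcasecmp(busid, dev->attribs->busid) == 0;
}
```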
Signed-off-by: Böszörményi Zoltán <zboszor@pr.hu>
This is useful for mock input drivers that control the server in
integration tests. Given that input submission happens on a different
thread than processing, it's otherwise impossible for the driver to
synchronize with the completion of the processing of submitted events.
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
Most (but not all) of these were found by using
codespell --builtin clear,rare,usage,informal,code,names
but not everything reported by that was fixed.
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
This option was implemented before the drivers were split in ≈2006,
and e.g. XWin still supports it.
With this commit, Xorg regains support, so that the following configuration can
be used to set the repeat rate for all keyboard devices without having to modify
Xorg command-line flags or having to automate xset(1):
Section "InputClass"
Identifier "system-keyboard"
MatchIsKeyboard "on"
Option "XkbLayout" "de"
Option "XkbVariant" "neo"
Option "AutoRepeat" "250 30"
EndSection
Signed-off-by: Michael Stapelberg <stapelberg@google.com>
xf86platformProbeDev didn't check the device path, fix it.
This is a problem when trying to set up a non-PCI device via
explicit xorg.conf.d configuration.
A USB DisplayLink device, being non-PCI, was always set up
as a GPU device assigned to screen 0 instead of as a regular
framebuffer potentially having its own dedicated screen,
despite a configuration such as the one below. Only the relevant parts
of the configuration are quoted; it's part of a larger context
with an Intel chip that has 3 outputs:
* DP1 connected to an LCD panel,
* VGA1 connected to an external monitor,
* HDMI1 unconnected and having no user visible connector
Section "ServerFlags"
Option "AutoBindGPU" "false"
EndSection
...
Section "Device"
Identifier "Intel2"
Driver "intel"
BusID "PCI:0:2:0"
Screen 2
Option "Monitor-HDMI1" "HDMI1"
Option "ZaphodHeads" "HDMI1"
EndSection
Section "Device"
Identifier "UDL"
Driver "modesetting"
Option "kmsdev" "/dev/dri/card0"
#BusID "usb:0:1.2:1.0"
Option "Monitor-DVI-I-1" "DVI-I-1"
Option "ShadowFB" "on"
Option "DoubleShadow" "on"
EndSection
...
Section "Screen"
Identifier "SCREEN2"
Option "AutoServerLayout" "on"
Device "UDL"
GPUDevice "Intel2"
Monitor "Monitor-DVI-I-1"
SubSection "Display"
Modes "1024x768"
Depth 24
EndSubSection
EndSection
Section "ServerLayout"
Identifier "LAYOUT"
Option "AutoServerLayout" "on"
Screen 0 "SCREEN"
Screen 1 "SCREEN1" RightOf "SCREEN"
Screen 2 "SCREEN2" RightOf "SCREEN1"
EndSection
On the particular machine where I was trying to set up a UDL device,
I found the following structure being used to match
the device to a platform device while debugging the issue:
xf86_platform_devices[0] == Intel, /dev/dri/card1, primary platform device
xf86_platform_devices[1] == UDL, /dev/dri/card0
devList[0] == "Intel0", ZaphodHeads: DP1
devList[1] == "Intel1", ZaphodHeads: VGA1
devList[2] == "UDL"
devList[3] == "Intel2", ZaphodHeads: HDMI1 (intended GPU device to UDL)
When xf86platformProbeDev() matched the UDL device, the BusID
check failed in both cases of:
* BusID "usb:0:1.2:1.0" was specified
* Option "kmsdev" "/dev/dri/card0" was specified
As a result, xf86platformProbeDev() went on to call probeSingleDevice()
with xf86_platform_devices[0] and devList[2], resulting in the
UDL device being set up as a GPU device assigned to the first screen
instead of as a framebuffer on the third screen as the configuration
specified.
Checking Option "kmsdev" in code code may be a layering violation.
But the modesetting driver is actually part of the Xorg sources
instead of being an external driver, so he "kmsdev" path knowledge
may be used here.
Signed-off-by: Böszörményi Zoltán <zboszor@pr.hu>
Since commit d8ec33fe05, an include on
glxvndabi.h has been added to hw/xfree86/common/xf86Init.c
However, if glx is disabled through --disable-glx and GLX headers are
not installed in the build's environment, build fails on:
In file included from xf86Init.c:81:
../../../include/glxvndabi.h:64:10: fatal error: GL/glxproto.h: No such file or directory
64 | #include <GL/glxproto.h>
| ^~~~~~~~~~~~~~~
Fix this failure by removing this include, which does not seem to be
needed (another option would have been to keep it under an #ifdef GLXEXT
block).
Fixes:
- http://autobuild.buildroot.org/results/de838a843f97673d1381a55fd4e9b07164693913
Signed-off-by: Fabrice Fontaine <fontaine.fabrice@gmail.com>
We now ask Glamor to use EGL_MESA_query_driver to obtain the DRI driver
name; if successful, we use that as the DRI driver name. Following the
existing dri2.c logic, we also use the same name for the VDPAU driver,
except for i965 (and now iris), where we switch to the "va_gl" fallback.
This allows us to bypass the PCI ID lists in xserver and centralize the
driver selection mechanism inside Mesa. The hope is that we no longer
have to update these lists for any future hardware.
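A minimal sketch of the query through libepoxy (the function names are the real EGL_MESA_query_driver entry points; error handling is elided):
```
#include <epoxy/egl.h>

static const char *
get_dri_driver_name(EGLDisplay dpy)
{
    if (!epoxy_has_egl_extension(dpy, "EGL_MESA_query_driver"))
        return NULL;                      /* fall back to the PCI ID lists */
    return eglGetDisplayDriverName(dpy);  /* e.g. "iris", "i965", "radeonsi" */
}
```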
During startup, the xfree86 DDX's InitOutput() calls PreInit for
protocol screens first, and then GPU screens. On teardown, dix_main()
calls CloseScreen in the reverse order: GPU screens first starting with
the last one and then working backwards, and then protocol screens also
in reverse order.
InitOutput() calls ScreenInit in the wrong order: for GPU screens first and then
for protocol screens. This causes a problem for drivers that have global state
that is tied to the first screen that calls ScreenInit.
Fix this by simply re-ordering the for loops to call ScreenInit for
protocol screens first and then for GPU screens second.
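A heavily simplified sketch of the reordered loops; AddScreen()/AddGPUScreen() are the real entry points, everything else is elided:
```
int i;

for (i = 0; i < xf86NumScreens; i++)        /* protocol screens first */
    if (AddScreen(xf86ScreenInit, argc, argv) == -1)
        FatalError("AddScreen/ScreenInit failed for driver %d\n", i);

for (i = 0; i < xf86NumGPUScreens; i++)     /* then GPU screens */
    AddGPUScreen(xf86ScreenInit, argc, argv);
```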
Slightly simplifies the callers since they don't need to check for
non-NULL anymore.
I do extremely hate the workarounds here to suppress misprite taking the
cursor down though. Surely there's a better way.
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
This is a modified version of a patch we've been carrying in Fedora and
RHEL for years now. This patch automatically adds secondary GPUs to the
master as output sink / offload source, making e.g. the use of
slave-outputs just work, without requiring the user to manually run
"xrandr --setprovideroutputsource" before they can hook up an external
monitor to their hybrid graphics laptop.
There is one problem with this patch, which is why it was not upstreamed
before. What to do when a secondary GPU gets detected really is a policy
decision (e.g. one may want to autobind PCI GPUs but not USB ones) and
as such should be under control of the Desktop Environment.
Unconditionally adding autobinding support to the xserver will result
in races between the DE dealing with the hotplug of a secondary GPU
and the server itself dealing with it.
However we've waited for years for any Desktop Environments to actually
start doing some sort of autoconfiguration of secondary GPUs and there
is still not a single DE dealing with this, so I believe that it is
time to upstream this now.
To avoid potential future problems if any DEs get support for doing
secondary GPU configuration themselves, the new autobind functionality
is made optional. Since no DEs currently support doing this themselves it
is enabled by default. When DEs grow support for doing this themselves
they can disable the servers autobinding through the servers cmdline or a
xorg.conf snippet.
Signed-off-by: Dave Airlie <airlied@gmail.com>
[hdegoede@redhat.com: Make configurable, fix with nvidia, submit upstream]
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
---
Changes in v2:
-Make the default enabled instead of installing a xorg.conf
snippet which enables it unconditionally
Changes in v3:
-Handle GPUScreen autoconfig in randr/rrprovider.c, looking at
rrScrPriv->provider, rather than in hw/xfree86/modes/xf86Crtc.c
looking at xf86CrtcConfig->provider. This fixes the autoconfig not
working with the nvidia binary driver
"bool" conflicts with C++ (meh) and stdbool.h (ngh alright fine). This
is a driver-visible change and will likely break the build for mach64,
but it can be fixed by simply using xf86ReturnOptValBool like every
other driver.
Signed-off-by: Adam Jackson <ajax@redhat.com>
<sys/io.h> on ARM hasn't worked for a long, long time, so it was removed
from glibc upstream.
Remove the include to avoid a compilation failure on ARM with glibc.
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Closes: https://gitlab.freedesktop.org/xorg/xserver/issues/840
Promote the generated file containing the date & time the build was
configured to top-level.
Rename it from xf86Build.h to buildDateTime.h.
Use it as well in XQuartz; stringize BUILD_DATE when needed.
If SYSTEMD_LOGIND is not defined, systemd_logind_take_fd is defined as a
macro evaluating to -1 by systemd-logind.h, leaving paused
uninitialized.
../hw/xfree86/common/xf86Xinput.c: In function ‘xf86NewInputDevice’:
../hw/xfree86/common/xf86Xinput.c:919:16: warning: ‘paused’ may be used uninitialized in this function [-Wmaybe-uninitialized]
../hw/xfree86/common/xf86Xinput.c:877:10: note: ‘paused’ was declared here
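The obvious shape of a fix, sketched against the call as described above (attrs and the argument list are assumptions, not the exact code):
```
Bool paused = FALSE;   /* must be initialized: without SYSTEMD_LOGIND the
                        * call below is a macro that expands to -1 and
                        * never writes through &paused */
int fd = systemd_logind_take_fd(attrs->major, attrs->minor,
                                attrs->device, &paused);
```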
Drivers may need to loop over the allocated screens during PreInit, for example
to consolidate xorg.conf options that apply to a GPU device as a whole.
Currently, this works for protocol screens because xf86Screens is exported, but
does not work for GPU screens.
Export xf86GPUScreens and xf86NumGPUScreens for consistency with xf86Screens and
xf86NumScreens.
Signed-off-by: Aaron Plattner <aplattner@nvidia.com>
Some Broadcom set-top-box boards have PCI busses, but the GPU is still
probed through DT. We would dereference a null busid here in that
case.
Signed-off-by: Eric Anholt <eric@anholt.net>
Lifted from vfb. xfree86 had almost the same thing but unparameterized,
port it to the vfb style.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Could cause privilege elevation and/or arbitrary files overwrite, when
the X server is running with elevated privileges (ie when Xorg is
installed with the setuid bit set and started by a non-root user).
CVE-2018-14665
Issue reported by Narendra Shinde and Red Hat.
Signed-off-by: Matthieu Herrb <matthieu@herrb.eu>
Reviewed-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Adam Jackson <ajax@redhat.com>
This hasn't done anything besides return TRUE in a long long time.
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Adam Jackson <ajax@redhat.com>
These are so close to identical that most DDXes implement one in terms
of the other. All the relevant cases can be distinguished by the error
code, so merge the functions together to make things simpler.
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Adam Jackson <ajax@redhat.com>
No supported driver supports 1bpp anymore, nor has in a very long time.
This option only worked with vgahw anyway.
Signed-off-by: Adam Jackson <ajax@redhat.com>
I don't think this is useful information to have in the log, and it's
a bunch of autotools and meson logic to produce it.
Signed-off-by: Eric Anholt <eric@anholt.net>
60ec8ead broke the autotools build:
sdksyms.o:(.data+0x58): undefined reference to `InitConnectionLimits'
sdksyms.o:(.data+0x2ec8): undefined reference to `xf86ServerName'
collect2: error: ld returned 1 exit status
Makefile:811: recipe for target 'Xorg' failed
Likewise 3a4d7c79 for InitConnectionLimits.
Signed-off-by: Adam Jackson <ajax@redhat.com>
If it's really this important we should just do it and not complain. We
never do it so it must not matter.
Signed-off-by: Adam Jackson <ajax@redhat.com>
I'm sure printing the address of function pointers in modules you'd
loaded might have made sense back when we rolled our own dlopen, but we
got better.
Signed-off-by: Adam Jackson <ajax@redhat.com>
The old code would not in fact validate the option value, though it
might complain about it in the log. It also didn't let you set some
legal values that the -maxclients command line option would.
Signed-off-by: Adam Jackson <ajax@redhat.com>
DGAShutdown() walks every screen and attempts to reset the mode. That's
maybe a reasonable thing to do, although the explicit loop is certainly
a bad smell.
In ddxGiveUp it's called after we've torn down the vga arbiter - and in
fact most of the rest of screen state - which is... very very bad. The
other place it's called is from the Control-Alt-BackSpace handler, where
we don't even attempt to do vga arb setup, and where in any case we're
going to escape the main loop eventually anyway.
Move all that cleanup work inside DGACloseScreen. This means it happens
earlier in server teardown than previously, but not in a way you're ever
going to be upset about.
Signed-off-by: Adam Jackson <ajax@redhat.com>
This makes us match the featureset of autotools, and also fixes the
non-Linux default value to match.
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
We already have pm_noop.c being built most of the time for the
no-OS-PM case, so just switch to always using it.
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
This lets an application open a suitable DRM device and pass the file
descriptor to the mode setting driver through an X server command line
option, '-masterfd'.
There's a companion application, xlease, which creates a DRM master by
leasing an output from another X server. That is available at
git clone git://people.freedesktop.org/~keithp/xlease
v2:
Always print usage, but note that it can't be used if
setuid/gid
Suggested-by: Lyude Paul <lyude@redhat.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Lyude Paul <lyude@redhat.com>
In commit 9db2af6f75 (xfree86: Remove xf86{Map,Unmap}VidMem) we
somehow stopped exporting xf86{Read,Write}Mmio{8,16,32}. Since the
function pointer indirection was intended to support dense vs sparse and
sparse support is now gone, we can just make the functions static inline
in compiler.h and avoid all of this.
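A sketch of the static-inline shape this lands on in compiler.h, dense case only and one width shown (the exact signatures in the header may differ):
```
#include <X11/Xmd.h>    /* CARD32 */

static inline CARD32
xf86ReadMmio32(void *base, unsigned long offset)
{
    return *(volatile CARD32 *)((char *)base + offset);
}

static inline void
xf86WriteMmio32(CARD32 val, void *base, unsigned long offset)
{
    *(volatile CARD32 *)((char *)base + offset) = val;
}
```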
Bugzilla: https://bugs.gentoo.org/548906
Tested-by: Christopher May-Townsend <chris@maytownsend.co.uk>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Matt Turner <mattst88@gmail.com>
The newline before the protocol version got lost in commit
6cbefc3e0a. Prior to that commit, the
release date printed a newline at the end:
X.Org X Server 1.19.6
Release Date: 2017-12-20
X Protocol Version 11, Revision 0
Build Operating System: Linux 4.14.12-1-ARCH x86_64
Now, that string gets run together with the version:
X.Org X Server 1.19.99.903 (1.20.0 RC 3)X Protocol Version 11, Revision 0
Build Operating System: Linux
Since the version string printing has a variety of #ifdefs in it, just
add the newline to the beginning of the protocol version string.
Signed-off-by: Aaron Plattner <aplattner@nvidia.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
[... but leave it defined and exported, since we're ABI-frozen - ajax]
Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Ben Crocker <bcrocker@redhat.com>
Reviewed-by: Antoine Martin <antoine@nagafix.co.uk>
Tested-by: Ben Crocker <bcrocker@redhat.com>
restore abi
Having different types of code all trying to check for elevated privileges
is a bad idea. This implementation is the most thorough one.
Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Ben Crocker <bcrocker@redhat.com>
Reviewed-by: Antoine Martin <antoine@nagafix.co.uk>
Tested-by: Ben Crocker <bcrocker@redhat.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Implement function added in DRI3 v1.1.
A newest version of libepoxy (>= 1.4.4) is required as earlier
versions use a problematic version of Khronos
EXT_image_dma_buf_import_modifiers spec.
v4: Only send scanout-supported modifiers if flipping is possible
v5: Fix memory corruption in XWayland (uninitialized pointer)
Signed-off-by: Louis-Francis Ratté-Boulianne <lfrb@collabora.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Acked-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
The big change here is MakeCurrent and context tag tracking. We now
delegate context tags entirely to the vnd layer, and simply store a
pointer to the context state as the tag data. If a context is deleted
while it's current, we allocate a fake ID for the context and move the
context state there, so the tag data still points to a real context. As
a result we can stop trying so hard to detach the client from contexts
at disconnect time and just let resource destruction handle it.
Since vnd handles all the MakeCurrent protocol now, our request handlers
for it can just be return BadImplementation. We also remove a bunch of
LEGAL_NEW_RESOURCE, because now by the time we're called vnd has already
allocated its tracking resource on that XID.
v2: Update to match v2 of the vnd import, and remove more redundant work
like request length checks.
v3: Add/remove the XID map from the vendor private thunk, not the
backend. (Kyle Brenneman)
v4: Fix deletion of ghost contexts (Kyle Brenneman)
Signed-off-by: Adam Jackson <ajax@redhat.com>
DoConfigure() attempts to call the PreInit handler on a device without
checking that the handler exists.
Check that the PreInit handler exists for a device before attempting to
call it.
Signed-off-by: Jeff Smith <whydoubt@gmail.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
When the dev2screen is sized to xf86NumDrivers in DoConfigure(),
subsequent code may attempt to write past the end of the array.
Size the dev2screen array to nDevToConfig instead.
Signed-off-by: Jeff Smith <whydoubt@gmail.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Commits b5dffbb and d75ffcd introduce code in xf86platformProbe() that
references a member of xf86configptr. However, when using the
"-configure" option, xf86configptr may not be initialized when
xf86platformProbe() is called.
Avoid referencing a member of xf86configptr if uninitialized.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100405
Signed-off-by: Jeff Smith <whydoubt@gmail.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
xf86pciBus.c:1464:21: warning: comparison of constant 256 with expression of type 'uint8_t' (aka 'unsigned char') is always true [-Wtautological-constant-out-of-range-compare]
if (pVideo->bus < 256)
The code used to be in xf86FormatPciBusNumber and compared a parameter that was an int, but since b967bf2a it has been inlined and now operates on a uint8_t.
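Reduced to a minimal case (the comparison is dead because a uint8_t can never hold 256):
```
#include <stdint.h>
#include <stdio.h>

static void
show_bus(uint8_t bus)
{
    if (bus < 256)                /* always true: uint8_t tops out at 255 */
        printf("PCI bus %u\n", bus);
}
```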
The only way to get at xf86Info.disableRandR from configuration is
Option "RANDR" "foo" in ServerFlags, which probably nobody is using
seeing as it's not documented. The other way it could be set is if a
screen supports RANDR 1.2, in which case we set it to avoid trying to
use the RANDR 1.1 compat code. If the second screen is not 1.2-aware
then this would mean we don't do RANDR setup on the second screen at
all, which would almost certainly crash the first time you try to do
RANDR operations on the second screen.
Fix that all by deletion, and just check whether the screen already has
RANDR initialized before installing the stub support. If you want to
disable RANDR, use the Extensions section of xorg.conf instead.
v2: Also remove a now entirely pointless log message, telling you to
ignore a line we will no longer print.
v3: Explain the fallback path in InitOutput. (Keith Packard)
v4: Check whether the RANDR private key is initialized before trying to
use it to look up the screen private.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Keith Packard <keithp@keithp.com>
Tsk. This broke vesa for me, the rrGetScrPriv in InitOutput will crash
if randr's screen private key hasn't been initialized yet. That seems
dumb, but let's not leave it broken.
This reverts commit c08d7c1cdd.
The only way to get at xf86Info.disableRandR from configuration is
Option "RANDR" "foo" in ServerFlags, which probably nobody is using
seeing as it's not documented. The other way it could be set is if a
screen supports RANDR 1.2, in which case we set it to avoid trying to
use the RANDR 1.1 compat code. If the second screen is not 1.2-aware
then this would mean we don't do RANDR setup on the second screen at
all, which would almost certainly crash the first time you try to do
RANDR operations on the second screen.
Fix that all by deletion, and just check whether the screen already has
RANDR initialized before installing the stub support. If you want to
disable RANDR, use the Extensions section of xorg.conf instead.
v2: Also remove a now entirely pointless log message, telling you to
ignore a line we will no longer print.
v3: Explain the fallback path in InitOutput. (Keith Packard)
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Keith Packard <keithp@keithp.com>
Fixes double-free later in xf86XvMCCloseScreen, which would generally
cause fireworks.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
This no longer does anything useful.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
The only consumer of this is the Linux vm86 backend for int10 (which you
should not use), and there all it serves to do is make signals generated
by the vm86 task non-fatal. In practice this error appears never to
happen, and marching ahead with root privileges after arbitrary code has
raised a signal seems like a poor plan.
Remove the usage in the vm86 code, making this error fatal.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
This was added in ~2004 for the sis driver, to detect whether it could
use SSE for memcpy. Charmingly, the code to check whether that feature
exists in the server is:
#if XORG_VERSION_CURRENT >= XORG_VERSION_NUMERIC(6,8,99,13,0)
#define SISCHECKOSSSE /* Automatic check OS for SSE; requires SigIll facility */
#endif
Which means it has never worked in any modular server release.
A less gross way to do this is to check for SSE support with getauxval()
or /proc/cpuinfo or similar. Since no driver is using the existing
intercept mechanism, drop it.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Roundhouse kick replacing the various (sizeof(foo)/sizeof(foo[0])) with
the ARRAY_SIZE macro from dix.h when possible. A semantic patch for
coccinelle has been used first. Additionally, a few macros have been
inlined as they had only one or two users.
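The dix.h macro being standardized on has the same shape as the open-coded expressions it replaces:
```
#define ARRAY_SIZE(a) (sizeof((a)) / sizeof((a)[0]))

static const int widths[] = { 8, 15, 16, 24, 32 };

static int
count_widths(void)
{
    /* was: sizeof(widths) / sizeof(widths[0]) */
    return ARRAY_SIZE(widths);
}
```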
Signed-off-by: Daniel Martin <consume.noise@gmail.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
By having it as a custom_target with build_always, every "ninja -C
build" would rebuild Xorg for the new date/time, even if the rest of
Xorg didn't change.
We could build the rest of Xorg into a static lib, and regenerate
date/time when the static lib changes and link that into a final Xorg,
but BUILD_DATE/TIME is such a dubious feature (compared to including a
git sha, which is easy with meson) it doesn't seem worth the build
time cost.
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Adam Jackson <ajax@redhat.com>
There were two bugs here: The comparison function was not stable when
one or more of the drivers being compared is a fallback, and the last
driver in the list would never be moved.
Signed-off-by: Adam Jackson <ajax@redhat.com>
xf86str.h is parsed into sdksyms unconditionally but the symbol is only
defined when building with PCI support. Move the decl to a header that
sdksyms only parses when building PCI support.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Jon Turney <jon.turney@dronecode.org.uk>
This symbol is used by some DRI2+ drivers and there's nothing
DRI1-specific about it.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Acked-by: Keith Packard <keithp@keithp.com>
It was attempting to use the <bus>@<domain> format accepted by the BusID
stanza, but the two values were swapped.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
The PCI domain has to be specified like this:
"PCI:<bus>@<domain>:<device>:<function>"
Example before:
(--) PCI:*(0:0:1:0) 1002:130f:1043:85cb [...]
(--) PCI: (0:1:0:0) 1002:6939:1458:229d [...]
after:
(--) PCI:*(0@0:1:0) 1002:130f:1043:85cb [...]
(--) PCI: (1@0:0:0) 1002:6939:1458:229d [...]
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
dirent->d_name is 256, so sprintf("%s/%s") into a 256 buffer gives us:
../hw/xfree86/common/xf86pciBus.c: In function ‘xf86MatchDriverFromFiles’:
../hw/xfree86/common/xf86pciBus.c:1330:52: warning: ‘snprintf’ output may be
truncated before the last format character [-Wformat-truncation=]
snprintf(path_name, sizeof(path_name), "%s/%s", ^~~~~~~
../hw/xfree86/common/xf86pciBus.c:1330:13: note: ‘snprintf’ output between 2
and 257 bytes into a destination of size 256
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
gcc -std=c99 does not define the former, and it's a horrible namespace
confusion anyway.
Signed-off-by: Julien Cristau <jcristau@debian.org>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Tested-by: Pekka Paalanen <pekka.paalanen@collabora.co.uk>
Implementation of a new driver matching algorithm. The new approach
doesn't add duplicate drivers and eases the driver matching phase.
v2: Re-commit the patch reverted in
2388f5e583, with Aaron Plattner's
fix squashed in (by anholt).
Signed-off-by: Karol Kosik <kkosik@nvidia.com>
Signed-off-by: Aaron Plattner <aplattner@nvidia.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Aaron Plattner <aplattner@nvidia.com> (v1)
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com> (v1)
Tested-by: Peter Hutterer <peter.hutterer@who-t.net>
Tested-by: Eric Anholt <eric@anholt.net>
This reverts commit 112d0d7d01.
It broke Xorg for Adam, Peter, and myself, by failing hard when a
module load failed.
Signed-off-by: Eric Anholt <eric@anholt.net>
glibc would like to stop declaring major()/minor() macros in
<sys/types.h> because that header gets included absolutely everywhere
and unix device major/minor is perhaps usually not what's expected. Fair
enough. If one includes <sys/sysmacros.h> as well then glibc knows we
meant it and doesn't warn, so do that if it exists.
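The resulting include pattern, sketched (HAVE_SYS_SYSMACROS_H is assumed to come from the build system's header check):
```
#include <sys/types.h>
#ifdef HAVE_SYS_SYSMACROS_H
#include <sys/sysmacros.h>    /* glibc's preferred home for major()/minor() */
#endif

/* major(dev) and minor(dev) now compile without the deprecation warning */
```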
Signed-off-by: Adam Jackson <ajax@redhat.com>
Implementation of a new driver matching algorithm. The new approach
doesn't add duplicate drivers and eases the driver matching phase.
Signed-off-by: Karol Kosik <kkosik@nvidia.com>
Reviewed-by: Aaron Plattner <aplattner@nvidia.com>
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
This is a work in progress that builds Xvfb, Xephyr, Xwayland, Xnest,
and Xdmx so far. The outline of Xquartz/Xwin support is in tree, but
hasn't been built yet. The unit tests are also not done.
The intent is to build this as a complete replacement for the
autotools system, then eventually replace autotools. meson is faster
to generate the build, faster to run the build, shorter to write the
build files in, and less error-prone than autotools.
v2: Fix indentation nits, move version declaration to project(), use
existing meson_options for version-config.h's vendor name/web.
Signed-off-by: Eric Anholt <eric@anholt.net>
Acked-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
We mostly use #ifdef throughout the tree, and this lets the generated
config.h files just be #define TOKEN instead of #define TOKEN 1.
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Without this, assertion failures can make life hard for users and those
trying to help them.
v2:
* Change commit log wording slightly to "can make life hard", since
apparently e.g. logind can alleviate that somewhat.
* Set default handler for SIGABRT in
hw/xfree86/common/xf86Init.c:InstallSignalHandlers() and
hw/xquartz/quartz.c:QuartzInitOutput() (Eric Anholt)
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
parser/scan.c was checking for #ifdef XCONFIGFILE and XCONFIGDIR and
defaulting to "xorg.conf", and "xorg.conf.d", so if you had changed
__XCONFIGFILE__ to anything else, it would have got out of sync.
Settle on the name without gratuitous underscores.
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Keith Packard <keithp@keithp.com>
No driver is using these, as far as I know.
v2: Tripwire the entity hook arguments to xf86Config*Entity, fix
documentation (Eric Anholt)
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Most of this is a legacy of the old "extmod" design where you could load
_some_ extensions dynamically but only if the server had been built with
support for them in the first place.
Note that since we now only initialize the DPMS extension if at least
one screen supports it, we no longer need DPMSCapableFlag: if it would
be false, we would never read its value.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Following on from the previous change, this adds a DPMS hook to the
ScreenRec and uses that to infer DPMS support. As a result we can drop
the dpms stub code from Xext.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Rather than setting up a per-screen private, just conditionally
initialize ScrnInfoRec::DPMSSet based on the config options, and inspect
that to determine whether DPMS is supported.
We also move the "turn the screen back on at CloseScreen" logic into the
DPMS extension's (new) reset hook. This would be a behavior change for
the non-xfree86 servers, if any of them had non-stub DPMS support.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
There's really no reason to pretend to support this, apps hate it, all
we're doing is giving people a way to injure themselves. It doesn't work
anyway with any Radeon, any NVIDIA chip, or any Intel chip since i810.
Rip out all the logic for handling 24bpp pixmaps and framebuffers, and
silently ignore the old options that would ask for it.
The cirrus alpine driver has been updated to default to 16bpp, and both
it and the i810 driver can now use the 32->24 conversion code in shadow
if they want. All other drivers support 32bpp. Configurations that
explicitly request 24bpp in order to fit in VRAM will be broken now
though.
v2: Fix command line options to silently ignore 24bpp rather than fail
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Adam Jackson <ajax@redhat.com>
This touches everything that ends up in the Xorg binary; the big missing
part is GLX since that's all generated code. Cuts about 14k from the
binary on amd64.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
First, move them to the end of the struct, for marginally better cache
locality for the struct members that actually have meaning; move the
existing slots at the end of the struct up near some others with similar
meanings. Second, only keep four slots each of integer, data pointer,
and function pointer; we've rarely used this escape hatch so this is
still plenty.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Aaron Plattner <aplattner@nvidia.com>
Acked-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Aaron Plattner <aplattner@nvidia.com>
Acked-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Never set by the core, not used in any modern driver.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Aaron Plattner <aplattner@nvidia.com>
Acked-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Just no.
The ddxDesign chunk removes the whole para about xf86FixPciResource,
since it turns out that function doesn't exist at all anymore.
The only drivers that reference this at all are i128 and mga, and even
then only in the non-pciaccess path.
v2:
- Update commentary about i128/mga
- Don't remove the BiosBase keyword from the config parser since that
would turn a no-op into a fatal error (Aaron Plattner)
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Aaron Plattner <aplattner@nvidia.com>
Acked-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Seriously not worth the effort of tracking this, especially now that
competent drivers don't have a limit. The sis driver does inspect this
member, but hilariously does so only so it can print the same information
as the core does.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Aaron Plattner <aplattner@nvidia.com>
Acked-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Only mach64 and rendition actually use this feature. Everyone else just
checks it in their ValidMode hook, they can too.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Aaron Plattner <aplattner@nvidia.com>
Acked-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
We don't actually need (or intend) to keep this struct the same across
revisions.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Aaron Plattner <aplattner@nvidia.com>
Acked-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Nobody was ever calling this with a non-null argument for subdir list or
pattern list. Having done this, InitSubdirs is only ever called with a
NULL argument, so it's really just a complicated way of duplicating the
default list; we can remove that and just walk the list directly.
The minor error code was only ever used to distinguish among two cases
of LDR_BADUSAGE. Whatever.
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Callers only ever use this for a single directory anyway.
While we're at it, also move xf86DriverListFromCompile near its only
user in the X -configure code (and inline it out of existence), and
remove LoaderFreeDirList as it's unused (since X -configure is just
going to exit anyway, none of that code cares about cleanup).
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
There's no reason a driver should ever care about this.
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
indent(1) gets confused by function-like macros with no trailing
semicolon, which is fair enough really.
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
The idea here is that the driver might have once been old enough to not
have the driverFunc slot in DriverRec, with the module ABI not having
changed when it was added. That was ages ago, and drivers always declare
themselves with DriverRec not DriverRec1, so uninitialized slots will
simply be zero.
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Everybody using this functionality specifies a major version, which
makes sense. If you don't care about a minor version, that's equivalent
to saying you require minor >= 0, so just say so; likewise patch level.
Likewise ABI class is always specified.
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
The enum has been unused since at least the removal of elfloader.
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
This looks like more, but only if you don't compare it to the number
pulled in by misc.h.
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
This API is dumb. uname(3) exists, feel free to use it, but ideally
write to the interface not to the OS. There are a couple of drivers
using this API, they could all reasonably just not.
This also removes the OS name from the loader subdirectory path search.
Having /usr/lib/xorg shared across OSes is a non-goal here.
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Similar to its little brother - LoadSubModule. Currently all call sites
provide NULL anyway ;-)
Reviewed-by: Aaron Plattner <aplattner@nvidia.com>
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Allow OutputClass config snippets to modify the module-path.
Note that any specified ModulePaths will be pre-pended to the normal
ModulePath. The idea behind this is that any output hardware specific
modules should have preference over the normal modules.
One use-case for this is the nvidia binary driver, this allows a
config snippet like this:
Section "OutputClass"
MatchDriver "nvidia"
Modulepath "/usr/lib64/nvidia/modules"
EndSection
To get the nvidia-specific glx module loaded, but only when the
nvidia kernel driver is loaded.
Together with the glvnd work done recently, this allows the nouveau
+ mesa and nvidia-binary userspace stacks to co-exist on the same
system without any ldconfig / xorg.conf tweaking and the xserver will
automatically do the right thing depending on which kernel driver
(nouveau or nvidia) is loaded.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Allow using:
Option "PrimaryGPU" "yes"
In an OutputClass section to override the default primary GPU device
selection which selects the GPU used as output by the firmware.
If multiple output devices match an OutputClass section with
the PrimaryGPU option set, the first one enumerated becomes the
primary GPU.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
This is a preparation patch for allowing an OutputClass section to
override the default primary GPU device selection.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Add support for setting options in OutputClass Sections and having these
applied to any matching output devices.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Make OutputClassMatches directly take a xf86_platform_device as argument,
rather than an index into xf86_platform_devices. This makes things
easier for callers which already have an xf86_platform_device pointer.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
xf86MatchDevice returns a dynamically allocated list of GDevPtr-s,
free this when we're done with it.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
In InitOutput, if xf86HandleConfigFile returns CONFIG_NOFILE
(which it does if no config file or directory is present), the
autoconfig flag is set, causing xf86AutoConfig to be called
later on.
xf86AutoConfig calls xf86OutputClassDriverList via the
call tree:
xf86AutoConfig =>
listPossibleVideoDrivers =>
xf86PlatformMatchDriver =>
xf86OutputClassDriverList
and xf86OutputClassDriverList attempts to traverse a linked list
that is a member of the XF86ConfigRec struct pointed to by the
global xf86configptr, which is NULL at this point because the
XF86ConfigRec struct is only allocated (by xf86readConfigFile)
AFTER the config file and directory have been successfully
opened; the CONFIG_NOFILE return from xf86HandleConfigFile
occurs BEFORE the call to xf86readConfigFile which allocates
the XF86ConfigRec struct.
Rx: In read.c (for symmetry with xf86freeConfig, which already
appears in this file), add a new function xf86allocateConfig
which tests the value of xf86configptr and, if it's NULL,
allocates the XF86ConfigRec struct and deposits the pointer
in xf86configptr. In xf86Parser.h, add a prototype for the
new xf86allocateConfig function.
Back in read.c, #include "xf86Config.h". In xf86readConfigFile,
change the open-code call to calloc to a call to the new
xf86allocateConfig function.
In xf86AutoConfig.c, add a call to the new xf86allocateConfig function
to the beginning of xf86AutoConfig to make sure the XF86ConfigRec struct
is allocated.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Ben Crocker <bcrocker@redhat.com>
If we did not find any non GPU Screens, try again ignoring the notion
of any video devices being the primary device. This fixes Xorg exiting
with a "no screens found" error when using virtio-vga in a
virtual-machine and when using a device driven by simpledrm.
This is a somewhat ugly solution, but it is the best I can come up with
without major surgery to the bus and probe code.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
This is primarily a preparation patch for fixing the xserver exiting with
a "no screens found" error even though there are supported video cards,
due to the server not recognizing any card as the primary card.
This also fixes the (mostly theoretical) case of a platformBus capable
driver adding a device as GPUscreen before a driver which only supports
the old PCI probe method gets a chance to claim it as a normal screen.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
If foundScreen is TRUE, then all the code below the removed if
will not execute until we reach the return foundScreen; at the
end, so this entire if block is redundant.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Special case for the systemd-logind case in xfree86: when we're vt-switched
away and a device is plugged in, we get a paused fd from logind. Since we
can't probe the device or do anything with it, we store that device in the
xfree86 DDX and handle it later when we vt-switch back. The device is not added
to inputInfo.devices until that time.
When the device is removed while still vt-switched away, the config system
never notifies the DDX. It only runs through inputInfo.devices, and our device
was never added to that.
When a device is plugged in, removed, and plugged in again while vt-switched
away, we have two entries in the xfree86-specific list that refer to the same
device node, both pending for addition later. On VT switch back, the first one
(the already removed one) will be added successfully, the second one (the
still plugged-in one) fails. Since the fd is correct, the device works until
it is removed again. The removed device's config_info (i.e. the syspath)
doesn't match the actual device we added though (the input number increases
with each plug), so it doesn't get removed, the fd remains open, and we lose
track of the fd count. Plugging the device in again leads to a dead device.
Fix this by adding a call to notify the DDX to purge any remainders of devices
with the given config_info, that's the only identifiable bit we have at this
point.
https://bugs.freedesktop.org/show_bug.cgi?id=97928
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
No functional changes but it makes it easier to remove elements from the
middle of the list (future patch).
We don't have an init call into this file, so the list is manually
initialized.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
They're identically laid-out structs but let's use the right type to search
for our desired value.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
The option is misleading and using it leads to disabling both direct and
accelerated indirect GLX. In such cases the xserver GLX attempts to
match DRISW (IGLX) configs with the DRI2/3 ones (direct GLX) leading to
all sorts of fun experience.
Remove the option until we get a clear split and control over direct vs
indirect GLX.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
This fix is so that the following xorg.conf can work:
Section "ServerFlags"
Option "AutoAddGPU" "off"
EndSection
Section "Device"
Identifier "Amd"
Driver "ati"
BusID "PCI:1:0:0"
EndSection
Section "Device"
Identifier "Intel"
Driver "modesetting"
BusID "pci:0:2:0"
EndSection
Section "Screen"
Identifier "Screen0"
Device "Intel"
GPUDevice "Amd"
EndSection
Without AutoAddGPU off, modesetting DDX will also be loaded
for GPUDevice.
Signed-off-by: Qiang Yu <Qiang.Yu@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
The new platform bus code and the old PCI bus code overlap. Platform bus
can handle any type of device, including PCI devices, whereas the PCI code
can only handle PCI devices. Some drivers only support the old style
PCI-probe methods, but the primary device detection code is server based,
not driver based; so we might end up with a primary device which only has
a PCI bus-capable driver, but was detected as primary by the platform
code, or the other way around.
(The above paragraph was shamelessly stolen from Hans de Goede, and
customized.)
The latter case applies to QEMU's virtio-gpu-pci device: it is detected as
a BUS_PCI primary device, but we actually probe it first (with the
modesetting driver) through xf86platformProbeDev(). The
xf86IsPrimaryPlatform() function doesn't recognize the device as primary
(it bails out as soon as it sees BUS_PCI); instead, we add the device as a
secondary graphics card under "autoAddGPU". In turn, the success of this
automatic probing-as-GPU prevents xf86CallDriverProbe() from proceeding to
the PCI probing.
The result is that the server exits with no primary devices detected.
Commit cf66471353 ("xfree86: use udev to provide device enumeration for
kms devices (v10)") added "cross-bus" matching to xf86IsPrimaryPci(). Port
that now to xf86IsPrimaryPlatform(), so that we can probe virtio-gpu-pci
as a primary card in platform bus code.
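A hedged sketch of the ported check (simplified; primaryBus, BUS_PCI and
BUS_PLATFORM are as in the xfree86 bus code, the helper name and the exact
slot comparison are illustrative):
```
/* Treat a platform device as primary either when it is the primary
 * platform device itself, or -- the cross-bus case -- when it is backed
 * by the PCI device that the PCI bus code detected as primary. */
static Bool
is_primary_platform_dev(struct xf86_platform_device *dev)
{
    if (primaryBus.type == BUS_PLATFORM)
        return primaryBus.id.plat == dev;

    if (primaryBus.type == BUS_PCI && dev->pdev)
        return primaryBus.id.pci->domain == dev->pdev->domain &&
               primaryBus.id.pci->bus    == dev->pdev->bus &&
               primaryBus.id.pci->dev    == dev->pdev->dev &&
               primaryBus.id.pci->func   == dev->pdev->func;

    return FALSE;
}
```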
Cc: Adam Jackson <ajax@redhat.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Hans de Goede <hdegoede@redhat.com>
Cc: Keith Packard <keithp@keithp.com>
Cc: Marcin Juszkiewicz <mjuszkiewicz@redhat.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-By: Marcin Juszkiewicz <mjuszkiewicz@redhat.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Xorg -configure relies on the bus implementation, e.g.
xf86pciBus.c to call xf86AddBusDeviceToConfigure(). The new
xf86platformBus code does not have support for this.
Almost all drivers support both the xf86platformBus and xf86pciBus
nowadays, and the generic xf86Bus xf86CallDriverProbe() function
prefers the new xf86platformBus probe method when available.
Since the platformBus paths do not call xf86AddBusDeviceToConfigure()
this results in Xorg -configure failing with the following error:
"No devices to configure. Configuration failed.".
Adding support for the xf86Configure code to xf86platformBus.c
is non-trivial, and since we advise users to normally run without
any xorg.conf at all, it is not worth the trouble.
However some users still want to use Xorg -configure to generate a
template config file, this commit implements a minimal fix to make
things work again for PCI devices by skipping the platform
probe method when xf86DoConfigure is set.
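A minimal sketch of that skip (simplified; the real xf86CallDriverProbe()
also handles the PCI probe hook, which is elided here):
```
Bool
xf86CallDriverProbe(DriverPtr drv, Bool detect_only)
{
    Bool foundScreen = FALSE;

#ifdef XSERVER_PLATFORM_BUS
    /* Skip the platform path while generating a config: only the PCI
     * path knows how to call xf86AddBusDeviceToConfigure(). */
    if (!xf86DoConfigure && drv->platformProbe != NULL)
        foundScreen = xf86platformProbeDev(drv);
#endif
    if (!foundScreen && drv->Probe != NULL)
        foundScreen = drv->Probe(drv, detect_only ? PROBE_DETECT : 0);

    return foundScreen;
}
```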
This has been tested on a system with integrated Intel graphics,
with both the intel and modesetting drivers, and restores Xorg -configure
functionality in both cases.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
This loop was written in a buggy style, causing a NULL driver pointer to be
passed to copyScreen(). copyScreen() only uses that pointer to generate an
identifier string, so this is mostly harmless on systems whose asprintf()
accepts NULL for the "%s" format (the generated identifiers are off by one
with respect to the driver names, and the last one contains NULL).
On systems that don't accept NULL for "%s" this causes a segmentation
fault when this code path is used (no xorg.conf, but a partial
config in xorg.conf.d, for instance).
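A self-contained illustration of the loop shape (not the actual
xf86Configure code; names are invented):
```
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>

static void
copy_screen_for(const char *driver_name, int index)
{
    char *ident = NULL;

    /* asprintf("%s", NULL) is undefined; glibc prints "(null)", other
     * libcs crash -- which is how the bug manifested. */
    if (asprintf(&ident, "Configured %s device %d", driver_name, index) < 0)
        return;
    printf("%s\n", ident);
    free(ident);
}

int
main(void)
{
    const char *drivers[] = { "intel", "modesetting", NULL };

    /* buggy style: for (i = 0; drivers[i]; ) copy_screen_for(drivers[++i], i);
     * advances past the entry the condition just tested, so identifiers
     * are off by one and the last call receives NULL. Fixed style: */
    for (int i = 0; drivers[i] != NULL; i++)
        copy_screen_for(drivers[i], i);

    return 0;
}
```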
Signed-off-by: Matthieu Herrb <matthieu@herrb.eu>
Reviewed-by: Keith Packard <keithp@keithp.com>
In the function xf86VGAarbiterScrnInit, when "pEnt->bus.type" is
BUS_PLATFORM, "pScrn->vgaDev" is never set, so "pScrn->vgaDev" is
equal to zero.
The variable "rsrc_decodes" in the function "xf86VGAarbiterAllowDRI" is not
initialized, which leads to an error when "pScrn->vgaDev == 0" and
"vga_count > 1": because "pScrn->vgaDev == 0", the function
"pci_device_vgaarb_get_info" only sets the value of "vga_count" and
leaves "rsrc_decodes" untouched, so "xf86VGAarbiterAllowDRI" returns
different values on different platforms. One platform returns TRUE, as
"rsrc_decodes" happens to default to 0, while another returns FALSE, as
its "rsrc_decodes" happens to default to 32767, which disables
direct rendering.
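A hedged sketch of the fix (simplified; pci_device_vgaarb_get_info() is
the libpciaccess call named above, the wrapper is illustrative):
```
#include <pciaccess.h>

/* With a NULL vga device (the BUS_PLATFORM case), the vgaarb call fills
 * in vga_count only, so rsrc_decodes must not be left as stack garbage. */
static int
allow_dri_sketch(struct pci_device *vga_dev)
{
    int vga_count = 0;
    int rsrc_decodes = 0;       /* previously uninitialized */

    pci_device_vgaarb_get_info(vga_dev, &vga_count, &rsrc_decodes);
    if (vga_count > 1 && rsrc_decodes)
        return 0;               /* arbitration active: disallow DRI */
    return 1;
}
```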
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=96937
Signed-off-by: Emily Deng <Emily.Deng@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Any code called from the driver ScreenInit may want to refer to
pScrn->pScreen. As the function passed to AddScreen is the first place
the DDX sees a new screen, the generic code needs to make sure that
value is set before passing control to the video driver's
initialization code.
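A minimal sketch of the ordering (simplified from the xfree86 screen
bring-up; xf86ScreenToScrn() is the real accessor, the wrapper name is
illustrative):
```
static Bool
ddx_screen_init(ScreenPtr pScreen, int argc, char **argv)
{
    ScrnInfoPtr pScrn = xf86ScreenToScrn(pScreen);

    pScrn->pScreen = pScreen;   /* set before the driver sees the screen */
    return (*pScrn->ScreenInit) (pScreen, argc, argv);
}
```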
This was found by running a driver which didn't bother to set this
value when the initial colormap was installed; xf86RandR12LoadPalette
tried to use pScrn->pScreen and crashed.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97124
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-and-Tested-by: Michel Dänzer <michel.daenzer@amd.com>
This is a problem for the libinput driver that uses the same context across
multiple devices. The driver may be halfway through setting up an input device
(and the only way to do so is to add it to libinput) when the input thread
comes in and reads events. This then causes mayhem when data is dereferenced
that hasn't been set up yet.
In my case the cause was the call to libinput_path_remove_device() inside
preinit racing with evdev_dispatch_device() handling of ENODEV. The sequence
was:
- thread 2 gets an event and calls evdev_dispatch_device()
- thread 1 calls libinput_path_remove_device() which sets the device->source
to NULL
- thread 2 reads from the fd, gets ENODEV and now removes the device->source,
dereferencing the null-pointer
This is the one I could reproduce the most, but there are other potential
pitfalls that affect any driver that uses the same fd for multiple devices.
Avoid all this and wrap PreInit into the lock.
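A minimal sketch of that wrap (simplified from the new-input-device path;
the helper name is illustrative):
```
static int
preinit_locked(InputDriverPtr drv, InputInfoPtr pInfo)
{
    int rval;

    input_lock();               /* keep the input thread out while the
                                 * device is half-initialized */
    rval = drv->PreInit(drv, pInfo, 0);
    input_unlock();

    return rval;
}
```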
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Keith Packard <keithp@keithp.com>
If a device couldn't be enabled we left the lock hanging.
This patch also removes the leftover OsReleaseSignals() call, now unnecessary.
Note that input_unlock() is now called later than OsReleaseSignals() was
previously. RemoveDevice() manipulates the input device and its file
descriptors, so it's safer to put the input_unlock() call after
RemoveDevice() to avoid events coming in while the device is being removed.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Keith Packard <keithp@keithp.com>
Instead of breaking the former when the driver supports the latter,
hook them up so that the hardware LUTs reflect the combination of the
current colourmap and gamma states. I.e. combine the colourmap, the
global gamma value/ramp and the RandR 1.2 per-CRTC gamma ramps into one
combined LUT per CRTC.
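A hedged, self-contained sketch of the combination step (the real code is
xf86RandR12CrtcComputeGamma; the scaling here is illustrative):
```
#include <stddef.h>
#include <stdint.h>

/* Push each channel through the colourmap/global-gamma palette first,
 * then through the per-CRTC RandR gamma ramp, yielding one LUT that the
 * hardware loads per CRTC. */
static void
compute_combined_lut(uint16_t *out,              /* gamma_size entries */
                     const uint16_t *palette,    /* colourmap + global gamma */
                     size_t palette_size,
                     const uint16_t *crtc_gamma, /* RandR per-CRTC ramp */
                     size_t gamma_size)
{
    for (size_t i = 0; i < gamma_size; i++) {
        size_t pal_idx = i * palette_size / gamma_size;
        size_t ramp_idx = (size_t) palette[pal_idx] * (gamma_size - 1) / 65535;

        out[i] = crtc_gamma[ramp_idx];
    }
}
```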
Fixes e.g. gamma sliders not working in games.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=27222
v2:
* Initialize palette_size and palette struct members, fixes crash on
server startup.
v3:
* Free randrp->palette in xf86RandR12CloseScreen, fixes memory leak.
v4:
* Call CMapUnwrapScreen if xf86RandR12InitGamma fails (Emil Velikov).
* Still allow xf86HandleColormaps to be called with a NULL loadPalette
parameter in the xf86_crtc_supports_gamma case.
v5:
* Clean up inner loops in xf86RandR12CrtcComputeGamma (Keith Packard)
* Move palette update out of per-CRTC loop in xf86RandR12LoadPalette
(Keith Packard)
v6:
* Handle reallocarray failure in xf86RandR12LoadPalette (Keith Packard)
Reviewed-by: Keith Packard <keithp@keithp.com>
ATTR_KEY maps to ID_INPUT_KEY which is set for any device with keys.
ID_INPUT_KEYBOARD and thus ATTR_KEYBOARD is set for devices that are actual
keyboards (and have a set of expected keys).
Hand-written match rules may only apply ID_INPUT_KEYBOARD, so make sure we
match on that too.
Arguably we should've been matching on ATTR_KEYBOARD only all along but
changing that likely introduces regressions.
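A minimal sketch of the match (ATTR_KEY and ATTR_KEYBOARD are the
InputAttributes flag bits named above; the helper is illustrative):
```
static int
matches_keyboard_class(const InputAttributes *attrs)
{
    /* ATTR_KEY: anything with keys; ATTR_KEYBOARD: actual keyboards,
     * including devices tagged only by hand-written match rules. */
    return (attrs->flags & (ATTR_KEY | ATTR_KEYBOARD)) != 0;
}
```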
Reported-by: Marty Plummer <netz.kernel@gmail.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
This removes the last uses of fd_set from the server interfaces
outside of the OS layer itself.
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
With no users of the interface needing the readmask anymore, we can
remove it from the argument passed to these functions.
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Remove code in xf86Wakeup for dealing with other input and switch to
using the new NotifyFd interface.
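A minimal sketch of the replacement pattern (SetNotifyFd()/RemoveNotifyFd()
are the NotifyFd entry points; the handler and wrapper names are
illustrative):
```
#include "os.h"         /* SetNotifyFd, RemoveNotifyFd, X_NOTIFY_READ */

static void
other_fd_notify(int fd, int ready, void *data)
{
    /* drain and dispatch whatever the fd carries */
}

static void
register_other_fd(int fd)
{
    /* replaces adding the fd to the select() readmask in xf86Wakeup */
    SetNotifyFd(fd, other_fd_notify, X_NOTIFY_READ, NULL);
}

static void
unregister_other_fd(int fd)
{
    RemoveNotifyFd(fd);
}
```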
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
The intent here was that fallback drivers would be at the end of the
list in order, but if a fallback driver happened to be at the end of the
list already that's not what would happen. Rather than open-code
something smarter, just use qsort.
Note that qsort puts things in ascending order, so somewhat backwardsly
fallbacks are greater than native drivers, and vesa is greater than
modesetting.
v2: Use strcmp to compare non-fallback drivers so we get a predictable
result if your libc's qsort isn't stable (Keith Packard)
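A hedged sketch of such a comparator (the fallback ranking is illustrative
and abbreviated; the real list also covers other fallbacks such as fbdev):
```
#include <stdlib.h>
#include <string.h>

/* Rank: native drivers (0) < modesetting (1) < vesa (2), so ascending
 * qsort puts fallbacks last, with vesa after modesetting. */
static int
fallback_rank(const char *name)
{
    if (strcmp(name, "vesa") == 0)
        return 2;
    if (strcmp(name, "modesetting") == 0)
        return 1;
    return 0;
}

static int
driver_compare(const void *a, const void *b)
{
    const char *da = *(const char *const *) a;
    const char *db = *(const char *const *) b;
    int ra = fallback_rank(da), rb = fallback_rank(db);

    if (ra != rb)
        return ra - rb;
    return strcmp(da, db);      /* predictable even with an unstable qsort */
}
```
Called as qsort(matches, nmatches, sizeof(char *), driver_compare) on the
array of matched driver names.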
Reviewed-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
As documented in xorg.conf(5), a value of ConstantDeceleration between 0
and 1 will speed up the pointer. However, values less than 1 actually
had no effect. Fix this.
Note that this bug only affected "ConstantDeceleration" as configured
through xorg.conf, not "Device Accel Constant Deceleration" as configured
through xinput. The property handler AccelSetDecelProperty() also did
not need to be changed, as it did not limit the values of the property.
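A hedged sketch of the config-side fix (xf86SetRealOption() is the real
option helper; the apply helper and surrounding types are illustrative):
```
static void
setup_deceleration(InputInfoPtr pInfo, DeviceIntPtr dev)
{
    float decel =
        xf86SetRealOption(pInfo->options, "ConstantDeceleration", 1.0);

    /* Accept any positive value; values in (0, 1) legitimately speed
     * the pointer up, per xorg.conf(5). Previously only values >= 1
     * took effect. */
    if (decel > 0.0f)           /* was effectively: if (decel >= 1.0f) */
        apply_constant_deceleration(dev, decel);    /* illustrative */
}
```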
Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=92766
Signed-off-by: Eric Biggers <ebiggers3@gmail.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
We want to notice that it's set, but still pass it through to dix.
Return 0 to indicate this.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
A new --with-fallback-input-driver=foo option allows selecting a
fallback driver for the server if the driver configured for the device
is not found. Note that this only applies when the device has a driver
assigned and that module fails to load, devices without a driver are
ignored as usual.
This avoids the situation where a configuration assigns e.g. the
synaptics driver but that driver is not available on the system,
resulting in a dead device. A fallback driver can at least provide some
functionality.
This becomes more important as we move towards making other drivers true
leaf nodes that can be installed/uninstalled as requested. Specifically,
for wacom and synaptics, a config that assigns either driver should be
viable even when the driver itself is not (yet) installed on the system.
It is up to the distributions to make sure that the fallback driver is
always installed. The fallback driver can be disabled with
--without-fallback-input-driver and is disabled by default on non-Linux
systems because we don't have generic drivers on those platforms.
Default driver on Linux is libinput, evdev is the only other serious
candidate here.
Sample log output:
[ 3274.421] (II) config/udev: Adding input device SynPS/2 Synaptics TouchPad (/dev/input/event4)
[ 3274.421] (**) SynPS/2 Synaptics TouchPad: Applying InputClass "touchpad weird driver"
[ 3274.421] (II) LoadModule: "banana"
[ 3274.422] (WW) Warning, couldn't open module banana
[ 3274.422] (II) UnloadModule: "banana"
[ 3274.422] (II) Unloading banana
[ 3274.422] (EE) Failed to load module "banana" (module does not exist, 0)
[ 3274.422] (EE) No input driver matching `banana'
[ 3274.422] (II) Falling back to input driver `libinput'
.. server proceeds to assign libinput, init the device, world peace and rainbows
everywhere, truly what a sight. Shame about the banana though.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Not visible in the patch, but the same stanza is repeated below inside
the #ifdef GLXEXT. There's no reason to bother with checking it if we
built without GLXEXT so remove the unconditional one.
Reviewed-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
As the man page for the latter states:
The effects of signal() in a multithreaded process are unspecified.
We already have an interface to call sigaction() instead, use it.
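A minimal sketch of the switch (OsSignal() is the existing interface
referred to above; the handler is illustrative):
```
#include "os.h"         /* OsSignal(), OsSigHandlerPtr */
#include <signal.h>

static void
on_signal(int sig)
{
    /* handler body */
}

static void
install_handler(void)
{
    /* OsSignal() installs the handler via sigaction(), which is
     * well-defined in a multithreaded process, unlike signal(). */
    OsSignal(SIGTERM, on_signal);
}
```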
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Keith Packard <keithp@keithp.com>
Threaded input doesn't use SIGIO anymore, but existing drivers using
xf86BlockSIGIO and xf86ReleaseSIGIO probably want to lock the input
mutex during those operations. Provide inline functions to do this
which are marked as 'deprecated' so that drivers will get warnings
until they are changed.
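A sketch of the shape such wrappers take (_X_DEPRECATED is the attribute
macro used in the server headers; exact prototypes may differ):
```
static inline _X_DEPRECATED int
xf86BlockSIGIO(void)
{
    input_lock();
    return 0;           /* historical "was set" state, now meaningless */
}

static inline _X_DEPRECATED void
xf86UnblockSIGIO(int wasset)
{
    (void) wasset;      /* kept only for source compatibility */
    input_unlock();
}
```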
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Oops. This didn't get removed when xfree86 was converted over to use
the input thread.
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Threaded input can affect drivers that use OsBlockSIGIO when dealing
with cursors.
Signed-off-by: Keith Packard <keithp@keithp.com>
Requested-by: Peter Hutterer <peter.hutterer@who-t.net>
Switch the XFree86 DDX over to threaded input
v2: Rewrite comment in xf86Helper about silken mouse
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
When this code was called from SIGIO, saving and restoring errno could
possibly have made sense in some strange environment. Now that this
will not be called from a signal handler, there is no reason to do that.
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
This removes all of the SIGIO handling support used for input
throughout the X server, preparing the way for using threads for input
handling instead.
Places calling OsBlockSIGIO and OsReleaseSIGIO are marked with calls
to stub functions input_lock/input_unlock so that we don't lose this
information.
xfree86 SIGIO support is reworked to use internal versions of
OsBlockSIGIO and OsReleaseSIGIO.
v2: Don't change locking order (Peter Hutterer)
v3: Comment weird && FALSE in xf86Helper.c
Leave errno save/restore in xf86ReadInput
Squash with stub adding patch (Peter Hutterer)
v4: Leave UseSIGIO config parameter so that
existing config files don't break (Peter Hutterer)
v5: Split a couple of independent patch bits out
of kinput.c (Peter Hutterer)
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Not all display managers make it easy (or possible) to modify the
command line flags passed to the server, so add a way to set this from
xorg.conf.
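For example, assuming the flag is exposed as an "IndirectGLX" ServerFlags
option (option name hedged; the patch adds it to the FlagOptions list):
Section "ServerFlags"
    Option "IndirectGLX" "on"
EndSection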
v2: Fix the FlagOptions list to not have IGLX after the terminator (Alan
Coopersmith)
Reviewed-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
The destination variable is never freed, thus we even plug some memory
leaks.
v2: Rebase against updated xf86CheckPrivs() helper.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
The current message was quite off ("file specified must be a relative path"
and the like). Just factor it out and use "path/file" as needed.
v2: Rework error message, drop "Using default", print actual arg value.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
The tablet pads have been separate kernel devices for a while now and
libwacom has labelled them with the udev ID_INPUT_TABLET_PAD for over a year
now. Add a new MatchIsTabletPad directive to apply configuration options
specifically to the Pad part of a tablet.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Adam Jackson <ajax@redhat.com>
-Wlogical-op now tells us:
devices.c:1685:23: warning: logical ‘and’ of equal expressions
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
All consumers have been ported to the root window callback, so this can
all be nuked.
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
There are no longer any loadable font modules (not that they ever did
much in the first place), so stop pretending they're a defined ABI
surface.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Julien Cristau <jcristau@debian.org>