Public server module API headers shouldn't be clobbered with non-exported
definitions, so move them out to a private header file.
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1290>
This function does the same thing as LogMessageVerb(), so there is no need to
keep a duplicate implementation around. Leave it as a macro until all callers,
including those in drivers, have been migrated.
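For illustration, the shim can be as small as this sketch, which assumes the
duplicate entry point is LogMessage() and that verbosity level 1 matches its
old behaviour (the exact name and level depend on the function in question):

    #include "os.h"   /* LogMessageVerb(), MessageType */

    /* Keep the old name as a thin macro over LogMessageVerb() rather than
     * carrying a second, identical implementation; drivers that still call
     * the old name keep compiling while the duplicate body goes away. */
    #define LogMessage(type, ...) LogMessageVerb(type, 1, __VA_ARGS__)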
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1679>
This function does the same thing as LogMessageVerb(), so there is no need to
keep a duplicate implementation around. Leave it as a macro until all callers,
including those in drivers, have been migrated.
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1679>
No supported driver handles 1bpp anymore, nor has any in a very long time.
This option only worked with vgahw anyway.
Signed-off-by: Adam Jackson <ajax@redhat.com>
The only way to get at xf86Info.disableRandR from configuration is
Option "RANDR" "foo" in ServerFlags, which probably nobody is using,
seeing as it's not documented. The other way it could be set is if a
screen supports RANDR 1.2, in which case we set it to avoid trying to
use the RANDR 1.1 compat code. If the second screen is not 1.2-aware,
this would mean we don't do RANDR setup on that screen at all, which
would almost certainly crash the first time you try a RANDR operation
on it.
Fix all of that by deletion, and just check whether the screen already has
RANDR initialized before installing the stub support. If you want to
disable RANDR, use the Extensions section of xorg.conf instead.
v2: Also remove a now entirely pointless log message, telling you to
ignore a line we will no longer print.
v3: Explain the fallback path in InitOutput. (Keith Packard)
v4: Check whether the RANDR private key is initialized before trying to
use it to look up the screen private.
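That check boils down to something like this sketch, assuming rrPrivKey,
rrGetScrPriv() and dixPrivateKeyRegistered() behave as in the current tree:

    #include "privates.h"   /* dixPrivateKeyRegistered() */
    #include "randrstr.h"   /* rrPrivKey, rrGetScrPriv() */

    static Bool
    screen_has_randr(ScreenPtr pScreen)
    {
        /* v4: the RANDR private key is only registered once RandR has been
         * initialized somewhere, so check that before using it to look up
         * the screen private. */
        if (!dixPrivateKeyRegistered(rrPrivKey))
            return FALSE;
        return rrGetScrPriv(pScreen) != NULL;
    }

    /* Only install the stub RANDR support when this returns FALSE. */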
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Keith Packard <keithp@keithp.com>
Tsk. This broke vesa for me: the rrGetScrPriv in InitOutput will crash
if RandR's screen private key hasn't been initialized yet. That seems
dumb, but let's not leave it broken.
This reverts commit c08d7c1cdd.
The only way to get at xf86Info.disableRandR from configuration is
Option "RANDR" "foo" in ServerFlags, which probably nobody is using,
seeing as it's not documented. The other way it could be set is if a
screen supports RANDR 1.2, in which case we set it to avoid trying to
use the RANDR 1.1 compat code. If the second screen is not 1.2-aware,
this would mean we don't do RANDR setup on that screen at all, which
would almost certainly crash the first time you try a RANDR operation
on it.
Fix all of that by deletion, and just check whether the screen already has
RANDR initialized before installing the stub support. If you want to
disable RANDR, use the Extensions section of xorg.conf instead.
v2: Also remove a now entirely pointless log message, telling you to
ignore a line we will no longer print.
v3: Explain the fallback path in InitOutput. (Keith Packard)
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Keith Packard <keithp@keithp.com>
This no longer does anything useful.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
The only consumer of this is the Linux vm86 backend for int10 (which you
should not use), and there its only effect is to make signals generated
by the vm86 task non-fatal. In practice this error appears never to
happen, and marching ahead with root privileges after arbitrary code has
raised a signal seems like a poor plan.
Remove the usage in the vm86 code, making this error fatal.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
This was added in ~2004 for the sis driver, to detect whether it could
use SSE for memcpy. Charmingly, the code to check whether that feature
exists in the server is:
#if XORG_VERSION_CURRENT >= XORG_VERSION_NUMERIC(6,8,99,13,0)
#define SISCHECKOSSSE /* Automatic check OS for SSE; requires SigIll facility */
#endif
Which means it has never worked in any modular server release.
A less gross way to do this is to check for SSE support with getauxval()
or /proc/cpuinfo or similar. Since no driver is using the existing
intercept mechanism, drop it.
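For reference, such a check could look like the following stand-alone sketch,
using GCC/clang's __builtin_cpu_supports() as one of the "or similar" options;
getauxval() or parsing /proc/cpuinfo would work equally well on Linux:

    #include <stdio.h>

    /* Ask the toolchain's CPU feature support instead of provoking SIGILL
     * and catching it. */
    static int have_sse(void)
    {
    #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
        __builtin_cpu_init();
        return __builtin_cpu_supports("sse");
    #else
        return 0;
    #endif
    }

    int main(void)
    {
        printf("SSE %savailable\n", have_sse() ? "" : "not ");
        return 0;
    }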
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
We mostly use #ifdef throughout the tree, and this lets the generated
config.h files just be #define TOKEN instead of #define TOKEN 1.
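To spell out the difference this relies on (a trivial stand-alone
illustration, not code from the tree):

    #include <stdio.h>

    /* What the generated config.h now emits: a bare define with no value.
     * That is fine for "#ifdef HAVE_FOO" tests, which the tree mostly uses;
     * a "#if HAVE_FOO" test would now be a preprocessor error instead of
     * quietly evaluating 1. */
    #define HAVE_FOO

    int main(void)
    {
    #ifdef HAVE_FOO
        puts("HAVE_FOO is defined");
    #endif
        return 0;
    }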
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
No driver is using these, as far as I know.
v2: Tripwire the entity hook arguments to xf86Config*Entity, fix
documentation (Eric Anholt)
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
There's really no reason to pretend to support this; apps hate it, and all
we're doing is giving people a way to injure themselves. It doesn't work
with any Radeon, any NVIDIA chip, or any Intel chip since i810 anyway.
Rip out all the logic for handling 24bpp pixmaps and framebuffers, and
silently ignore the old options that would ask for it.
The cirrus alpine driver has been updated to default to 16bpp, and both
it and the i810 driver can now use the 32->24 conversion code in shadow
if they want. All other drivers support 32bpp. Configurations that
explicitly request 24bpp in order to fit in VRAM will be broken now
though.
v2: Fix command line options to silently ignore 24bpp rather than fail
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Adam Jackson <ajax@redhat.com>
This API is dumb. uname(3) exists; feel free to use it, but ideally
write to the interface, not to the OS. There are a couple of drivers
using this API, and they could all reasonably just not.
This also removes the OS name from the loader subdirectory path search.
Having /usr/lib/xorg shared across OSes is a non-goal here.
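For driver authors who genuinely need the OS name, the standard call is all
it takes (a minimal, stand-alone example):

    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void)
    {
        struct utsname u;

        /* The OS name, release and architecture, straight from the kernel. */
        if (uname(&u) == 0)
            printf("%s %s (%s)\n", u.sysname, u.release, u.machine);
        return 0;
    }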
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
If we did not find any non-GPU screens, try again, ignoring the notion
of any video device being the primary device. This fixes Xorg exiting
with a "no screens found" error when using virtio-vga in a
virtual machine and when using a device driven by simpledrm.
This is a somewhat ugly solution, but it is the best I can come up with
without major surgery to the bus and probe code.
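Schematically, with hypothetical helper names rather than the actual bus-code
entry points, the fallback amounts to:

    #include <stdbool.h>

    /* Hypothetical helper standing in for the real probe path. */
    bool probe_screens(bool want_primary);

    bool
    probe_all_screens(void)
    {
        /* First pass: honour the usual notion of a primary video device. */
        if (probe_screens(true))
            return true;

        /* No non-GPU screens found (e.g. virtio-vga, simpledrm, where no
         * device ends up flagged as primary): retry, ignoring whether any
         * video device is considered primary. */
        return probe_screens(false);
    }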
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Threaded input doesn't use SIGIO anymore, but existing drivers that call
xf86BlockSIGIO and xf86ReleaseSIGIO probably want to lock the input
mutex during those operations. Provide inline functions that do this and
are marked 'deprecated', so that drivers get warnings until they are
updated.
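A sketch of what such shims could look like, assuming input_lock() and
input_unlock() are the threaded-input mutex entry points and using GCC's
deprecated attribute (the real declarations may differ):

    #include "input.h"   /* input_lock(), input_unlock() */

    /* Old SIGIO-era entry points, kept so existing drivers keep building.
     * The attribute makes every caller emit a warning until it is ported
     * to input_lock()/input_unlock() directly. */
    static inline void __attribute__((deprecated))
    xf86BlockSIGIO(void)
    {
        input_lock();
    }

    static inline void __attribute__((deprecated))
    xf86ReleaseSIGIO(void)
    {
        input_unlock();
    }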
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
All consumers have been ported to the root window callback, so this can
all be nuked.
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
The giant OBSOLETE DO NOT USE comment has been there since 2000; it's
probably safe to nuke by now.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Factor this code out into functions so that it can be re-used for the
systemd-logind device pause/resume paths.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
This lets us stop using the 'pointer' typedef in Xdefs.h, as 'pointer'
is used throughout the X server for other things, and having duplicate
names generates compiler warnings.
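The kind of collision this avoids, as a contrived stand-alone illustration
(the exact diagnostic depends on compiler and flags):

    /* With a file-scope "typedef void *pointer;" in scope, every local or
     * parameter that happens to be named "pointer" shadows the type and
     * trips -Wshadow-style warnings: */
    typedef void *pointer;          /* the typedef the server stops relying on */

    static void
    example(void *data)
    {
        char *pointer = data;       /* shadows the typedef -> warning */
        (void) pointer;
    }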
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Eric Anholt <eric@anholt.net>