Public server module API headers shouldn't be cluttered with non-exported
definitions, so move them out to a private header file.
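As a sketch of the split (file and symbol names here are hypothetical):

/* foo.h -- installed, public server module API: exported interfaces only */
extern _X_EXPORT int fooExportedFunc(void);

/* foo_priv.h -- not installed: the non-exported, server-internal bits */
extern int fooInternalHelper(void);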
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1290>
PANORAMIX was the original working title of the extension before it became
an official standard. Nobody ever bothered to rename the symbols to match
the official naming.
For backwards compatibility with drivers, the old PANORAMIX symbol will
still be set.
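As a sketch of the compatibility shim (its exact placement in the build is
an assumption here):

/* the extension's official name is Xinerama; new code keys off XINERAMA */
#ifdef XINERAMA
/* legacy working-title symbol, kept so existing drivers keep building */
#define PANORAMIX 1
#endif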
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1258>
This has been nothing but an alias for two decades now (since somewhere
around R6.6), so there doesn't seem to be any practical need for this
indirection.
The macro still needs to remain as long as (external) drivers are still
using it.
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1529>
The xnfreallocarray macro was added alongside (and just as an alias to)
XNFreallocarray back a decade ago. It's only used in a few places and only
saves us from passing the first parameter (NULL), so the actual benefit
isn't really huge.
No (known) driver is using it, so the macro can be dropped entirely.
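The call-site change is mechanical; a sketch (variable names hypothetical):

items = xnfreallocarray(items, count, sizeof(*items));
/* becomes */
items = XNFreallocarray(items, count, sizeof(*items));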
Fixes: ae75d50395
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1529>
The xnfallocarray macro was added alongside (and just as an alias to)
XNFreallocarray back a decade ago.
No (known) driver is using it, so the macro can be dropped entirely.
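As a sketch, assuming the usual shape where the macro merely fills in the
NULL first argument:

#define xnfallocarray(num, size) XNFreallocarray(NULL, num, size)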
Fixes: ae75d50395
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1529>
This has been nothing but an alias for two decades now (since somewhere
around R6.6), so there doesn't seem to be any practical need for this
indirection.
The macro still needs to remain as long as (external) drivers are still
using it.
Fixes: ded6147bfb
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1529>
Before this change, the core keyboard always used a layout of "us", even
if the build was configured with another default layout.
Signed-off-by: Michael Dluhosch <michael.dluhosch@gmail.com>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1549>
Instead of relying on very indirect includes, it's cleaner when every file
explicitly includes what it really needs.
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1417>
The X server swapping code is a huge attack surface, much of this code
is untested and prone to security issues. The use-case of byte-swapped
clients is very niche, so let's disable this by default and allow it
only when the respective config option or commandline flag is given.
For Xorg, this adds the ServerFlag "AllowByteSwappedClients" "on".
For all DDX, this adds the commandline options +byteswappedclients and
-byteswappedclients to enable or disable, respectively.
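For example, to re-enable byte-swapped clients from the config file:

Section "ServerFlags"
    Option "AllowByteSwappedClients" "on"
EndSection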
Fixes #1201
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1029>
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
screenp->displays[count] (passed to configDisplay() in
configScreen()) is NULL if there is no Virtual setting
in the configuration.
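The shape of the fix is a NULL check before the call (a sketch, not
necessarily the verbatim patch):

if (screenp->displays[count] != NULL) {
    /* only configure displays that were actually allocated;
     * call configDisplay() as before */
}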
Fixes: f8a6be04d0 ("xfree86: Change displays array to pointers array to fix invalid pointer issues after table reallocation")
Signed-off-by: Zoltán Böszörményi <zboszor@gmail.com>
There are rare cases where xf86SetDepthBpp resizes the displays array in
confScreen.
As that array is shared between a set of ScrnInfoRec's, the realloc might
invalidate cached DispPtr display values in other ScrnInfoRec objects.
If we change the displays array into an array of pointers to DispRec, the
cached DispRec pointers in ScrnInfoRec won't become invalid after the
displays array is reallocated.
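A sketch of the data-structure change:

/* before: contiguous DispRec array; realloc may move every element */
DispRec *displays;
/* after: array of pointers; realloc moves only the pointer table, so
 * DispPtr values cached in other ScrnInfoRec objects stay valid */
DispPtr *displays;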
Signed-off-by: Łukasz Spintzyk <lukasz.spintzyk@synaptics.com>
The code path added by commit 69e4b8e6 (xfree86: attempt to autoconfig
gpu slave devices (v3)) assumes that it will only be run if the primary
device on the screen is the first device in xf86configptr->conf_device_lst.
While this is true most of the time, there are two specific cases where
this assumption fails.
First, if the first device in conf_device_lst is assigned to a different
seat than the running X server, it will be skipped by the previous
FIND_SUITABLE macro usage. Second, if the primary device was explicitly
assigned to the screen but auto_gpu_device is still set and no secondary
devices were explicitly listed, that device may not be the first device
in conf_device_lst.
When the first device in conf_device_lst is not the primary device
assigned to the screen, two problems emerge. First, the first device in
conf_device_lst will never be assigned to the screen as a secondary
device. Second, the primary device is additionally assigned to the
screen as a secondary device. The combination of these problems causes
certain otherwise valid configurations to be invalid. For example, if a
primary device is assigned to a screen and a secondary device is listed
in the config but not explicitly assigned to the screen, then one order
of the device sections results in a usable PRIME or Reverse PRIME setup
and the other order does not.
This commit removes the assumption that the primary device is the first
device in conf_device_lst by starting the loop from the start of
conf_device_lst and skipping the primary device when it is encountered.
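A sketch of the resulting loop (the name of the primary-device handle is
hypothetical):

for (slp = xf86configptr->conf_device_lst; slp; slp = slp->list.next) {
    if (slp == primaryDev)   /* the device already assigned to the screen */
        continue;            /* skip it instead of re-adding it */
    /* ... assign slp to the screen as a secondary GPU device ... */
}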
Signed-off-by: Jacob Cherry <jcherry@nvidia.com>
This is a modified version of a patch we've been carrying in Fedora and
RHEL for years now. This patch automatically adds secondary GPUs to the
master as output sink / offload source, making e.g. the use of
slave-outputs just work, without requiring the user to manually run
"xrandr --setprovideroutputsource" before they can hook up an external
monitor to their hybrid graphics laptop.
There is one problem with this patch, which is why it was not upstreamed
before. What to do when a secondary GPU gets detected really is a policy
decision (e.g. one may want to autobind PCI GPUs but not USB ones) and
as such should be under the control of the Desktop Environment.
Unconditionally adding autobinding support to the xserver will result
in races between the DE dealing with the hotplug of a secondary GPU
and the server itself dealing with it.
However, we've waited for years for Desktop Environments to actually
start doing some sort of autoconfiguration of secondary GPUs, and there
is still not a single DE dealing with this, so I believe it is time to
upstream this now.
To avoid potential future problems if any DEs get support for doing
secondary GPU configuration themselves, the new autobind functionality
is made optional. Since no DEs currently support doing this themselves,
it is enabled by default. When DEs grow support for doing this themselves,
they can disable the server's autobinding through the server's cmdline or
an xorg.conf snippet.
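For instance (the option name follows the Fedora/RHEL patch and should be
treated as an assumption here):

Section "ServerFlags"
    Option "AutoBindGPU" "false"
EndSection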
Signed-off-by: Dave Airlie <airlied@gmail.com>
[hdegoede@redhat.com: Make configurable, fix with nvidia, submit upstream]
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
---
Changes in v2:
-Make the default enabled instead of installing a xorg.conf
snippet which enables it unconditionally
Changes in v3:
-Handle GPUScreen autoconfig in randr/rrprovider.c, looking at
rrScrPriv->provider, rather than in hw/xfree86/modes/xf86Crtc.c
looking at xf86CrtcConfig->provider. This fixes the autoconfig not
working with the nvidia binary driver
The old code would not in fact validate the option value, though it
might complain about it in the log. It also didn't let you set some
legal values that the -maxclients command line option would.
Signed-off-by: Adam Jackson <ajax@redhat.com>
[... but leave it defined and exported, since we're ABI-frozen - ajax]
Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Ben Crocker <bcrocker@redhat.com>
Reviewed-by: Antoine Martin <antoine@nagafix.co.uk>
Tested-by: Ben Crocker <bcrocker@redhat.com>
restore abi
The only way to get at xf86Info.disableRandR from configuration is
Option "RANDR" "foo" in ServerFlags, which probably nobody is using
seeing as it's not documented. The other way it could be set is if a
screen supports RANDR 1.2, in which case we set it to avoid trying to
use the RANDR 1.1 compat code. If the second screen is not 1.2-aware
then this would mean we don't do RANDR setup on the second screen at
all, which would almost certainly crash the first time you try to do
RANDR operations on the second screen.
Fix that all by deletion, and just check whether the screen already has
RANDR initialized before installing the stub support. If you want to
disable RANDR, use the Extensions section of xorg.conf instead.
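That is, something like:

Section "Extensions"
    Option "RANDR" "Disable"
EndSection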
v2: Also remove a now entirely pointless log message, telling you to
ignore a line we will no longer print.
v3: Explain the fallback path in InitOutput. (Keith Packard)
v4: Check whether the RANDR private key is initialized before trying to
use it to look up the screen private.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Keith Packard <keithp@keithp.com>
Tsk. This broke vesa for me, the rrGetScrPriv in InitOutput will crash
if randr's screen private key hasn't been initialized yet. That seems
dumb, but let's not leave it broken.
This reverts commit c08d7c1cdd.
The only way to get at xf86Info.disableRandR from configuration is
Option "RANDR" "foo" in ServerFlags, which probably nobody is using
seeing as it's not documented. The other way it could be set is if a
screen supports RANDR 1.2, in which case we set it to avoid trying to
use the RANDR 1.1 compat code. If the second screen is not 1.2-aware
then this would mean we don't do RANDR setup on the second screen at
all, which would almost certainly crash the first time you try to do
RANDR operations on the second screen.
Fix that all by deletion, and just check whether the screen already has
RANDR initialized before installing the stub support. If you want to
disable RANDR, use the Extensions section of xorg.conf instead.
v2: Also remove a now entirely pointless log message, telling you to
ignore a line we will no longer print.
v3: Explain the fallback path in InitOutput. (Keith Packard)
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Keith Packard <keithp@keithp.com>
gcc -std=c99 does not define the former, and it's a horrible namespace
confusion anyway.
Signed-off-by: Julien Cristau <jcristau@debian.org>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Tested-by: Pekka Paalanen <pekka.paalanen@collabora.co.uk>
Most of this is a legacy of the old "extmod" design where you could load
_some_ extensions dynamically but only if the server had been built with
support for them in the first place.
Note that since we now only initialize the DPMS extension if at least
one screen supports it, we no longer need DPMSCapableFlag: if it would
be false, we would never read its value.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
There's really no reason to pretend to support this, apps hate it, all
we're doing is giving people a way to injure themselves. It doesn't work
anyway with any Radeon, any NVIDIA chip, or any Intel chip since i810.
Rip out all the logic for handling 24bpp pixmaps and framebuffers, and
silently ignore the old options that would ask for it.
The cirrus alpine driver has been updated to default to 16bpp, and both
it and the i810 driver can now use the 32->24 conversion code in shadow
if they want. All other drivers support 32bpp. Configurations that
explicitly request 24bpp in order to fit in VRAM will be broken now
though.
v2: Fix command line options to silently ignore 24bpp rather than fail
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Just no.
The ddxDesign chunk removes the whole para about xf86FixPciResource,
since it turns out that function doesn't exist at all anymore.
The only drivers that reference this at all are i128 and mga, and even
then only in the non-pciaccess path.
v2:
- Update commentary about i128/mga
- Don't remove the BiosBase keyword from the config parser since that
would turn a no-op into a fatal error (Aaron Plattner)
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Aaron Plattner <aplattner@nvidia.com>
Acked-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Callers only ever use this for a single directory anyway.
While we're at it, also move xf86DriverListFromCompile near its only
user in the X -configure code (and inline it out of existence), and
remove LoaderFreeDirList as it's unused (since X -configure is just
going to exit anyway, none of that code cares about cleanup).
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
There's no reason a driver should ever care about this.
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
The option is misleading and using it leads to disabling both direct and
accelerated indirect GLX. In such cases the xserver GLX attempts to
match DRISW (IGLX) configs with the DRI2/3 ones (direct GLX) leading to
all sorts of fun experience.
Remove the option until we get a clear split and control over direct vs
indirect GLX.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
The intent here was that fallback drivers would be at the end of the
list in order, but if a fallback driver happened to be at the end of the
list already that's not what would happen. Rather than open-code
something smarter, just use qsort.
Note that qsort puts things in ascending order, so somewhat backwardsly
fallbacks are greater than native drivers, and vesa is greater than
modesetting.
v2: Use strcmp to compare non-fallback drivers so we get a predictable
result if your libc's qsort isn't stable (Keith Packard)
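A sketch of the comparator (the fallback predicate is hypothetical):

static int
driver_sort(const void *_l, const void *_r)
{
    const char *l = *(const char * const *) _l;
    const char *r = *(const char * const *) _r;
    int fl = is_fallback(l);   /* hypothetical predicate */
    int fr = is_fallback(r);

    if (fl != fr)
        return fl - fr;    /* native drivers sort before fallbacks */
    return strcmp(l, r);   /* predictable even with an unstable qsort */
}

/* qsort(matches, nmatches, sizeof(char *), driver_sort); */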
Reviewed-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Not visible in the patch, but the same stanza is repeated below inside
the #ifdef GLXEXT. There's no reason to bother with checking it if we
built without GLXEXT so remove the unconditional one.
Reviewed-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
This removes all of the SIGIO handling support used for input
throughout the X server, preparing the way for using threads for input
handling instead.
Places calling OsBlockSIGIO and OsReleaseSIGIO are marked with calls
to stub functions input_lock/input_unlock so that we don't lose this
information.
xfree86 SIGIO support is reworked to use internal versions of
OsBlockSIGIO and OsReleaseSIGIO.
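The call sites change mechanically; a sketch:

input_lock();     /* was: OsBlockSIGIO();   */
/* ... touch input-critical state ... */
input_unlock();   /* was: OsReleaseSIGIO(); */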
v2: Don't change locking order (Peter Hutterer)
v3: Comment weird && FALSE in xf86Helper.c
Leave errno save/restore in xf86ReadInput
Squash with stub adding patch (Peter Hutterer)
v4: Leave UseSIGIO config parameter so that
existing config files don't break (Peter Hutterer)
v5: Split a couple of independent patch bits out
of kinput.c (Peter Hutterer)
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Not all display managers make it easy (or possible) to modify the
command line flags passed to the server, so add a way to get to it from
xorg.conf.
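For example (the exact option name is an assumption here):

Section "ServerFlags"
    Option "IndirectGLX" "on"
EndSection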
v2: Fix the FlagOptions list to not have IGLX after the terminator (Alan
Coopersmith)
Reviewed-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
This applies regardless of which DRI you're asking for. Worse, leaving
it out means breaking the config file syntax in a pointless way, since
non-DRI servers can safely just parse it and ignore it.
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Make the maximum number of clients user-configurable, either from the
command line or from xorg.conf.
This patch works by using MAXCLIENTS (raised to 512) as the maximum
allowed number of clients, while allowing the actual limit to be set by
the user to a lower value (keeping the default of 256).
There is a limit of 29 bits available to store both the client ID and the
per-client X resource IDs, so by reducing the number of clients allowed
to connect to the X server, the user can increase the number of X
resources per client, or vice versa.
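Concretely: with 512 clients the client field takes 9 of the 29 bits,
leaving 2^20 (about a million) resource IDs per client; at the default of
256 clients it takes 8 bits, leaving 2^21 per client. From xorg.conf this
looks like:

Section "ServerFlags"
    Option "MaxClients" "512"
EndSection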
Parts of this patch are based on a similar patch from Adam Jackson
<ajax@redhat.com>
This now requires at least xproto 7.0.28
Signed-off-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
Michel pointed out that I broke Zaphod with the initial auto-add GPU
devices change.
Fix this by only auto-adding GPU devices if we are screen 0 and there
are no other screens in the layout. Anyone who wants to assign GPU
devices can specify it in the xorg.conf for this use case.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
This allows us to skip the Screen section: the first Device section will
get assigned to the screen, and any remaining ones will get assigned to
the GPUDevice sections for the screen.
v2: fix the skipping unsuitable screen logic (Aaron)
v3: fix segfault if not conf file (me, 5s after sending v2)
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This allows gpu devices to be specified in xorg.conf Screen sections.
Section "Device"
Driver "intel"
Identifier "intel0"
Option "AccelMethod" "uxa"
EndSection
Section "Device"
Driver "modesetting"
Identifier "usb0"
EndSection
Section "Screen"
Identifier "screen"
Device "intel0"
GPUDevice "usb0"
EndSection
This should allow for easier tweaking of driver options which
currently mess up the GPU device discovery process.
v2: add error handling for more than 4 devices (Emil)
fix up CONF_ defines for consistency
add MAX_GPUDEVICES define
(yes, there are two defines; this is consistent
with everywhere else).
remove braces around slp (Mark Kettenis)
man page fixups (Aaron)
v2.1: fixup whitespace (Aaron)
Reviewed-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
calloc is going to multiply anyway, so if we have non-constant values we
might as well let it do the multiplication instead of adding another
multiply, and good versions of calloc will check for and avoid overflow
in the process.
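The pattern, as a sketch:

/* before: we do the multiplication ourselves, which can overflow silently */
ptr = calloc(1, num * size);
/* after: calloc performs the multiply and checks for overflow itself */
ptr = calloc(num, size);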
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Remove the error return path from the FLAG_PIXMAP path and leave the
default value in place. There's no point skipping the rest of this
function.
Signed-off-by: Keith Packard <keithp@keithp.com>