This touches everything that ends up in the Xorg binary; the big missing
part is GLX since that's all generated code. Cuts about 14k from the
binary on amd64.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
First, move them to the end of the struct, for marginally better cache
locality for the struct members that actually have meaning; move the
existing slots at the end of the struct up near some others with similar
meanings. Second, only keep four slots each of integer, data pointer,
and function pointer; we've rarely used this escape hatch so this is
still plenty.
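As a sketch, the resulting layout looks something like this (member names
here are illustrative, not the real ScrnInfoRec fields):

typedef struct _ScrnInfoRec {
    /* ... members that actually have meaning ... */

    /* Escape hatch: spare slots so new fields can be added without
     * an ABI bump.  Four of each kind, given how rarely these have
     * ever been used. */
    int    reservedInt[4];
    void  *reservedPtr[4];
    void (*reservedFunc[4])(void);
} ScrnInfoRec;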
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Aaron Plattner <aplattner@nvidia.com>
Acked-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Aaron Plattner <aplattner@nvidia.com>
Acked-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Never set by the core, not used in any modern driver.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Aaron Plattner <aplattner@nvidia.com>
Acked-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Just no.
The ddxDesign chunk removes the whole paragraph about xf86FixPciResource,
since it turns out that function doesn't exist at all anymore.
The only drivers that reference this at all are i128 and mga, and even
then only in the non-pciaccess path.
v2:
- Update commentary about i128/mga
- Don't remove the BiosBase keyword from the config parser since that
would turn a no-op into a fatal error (Aaron Plattner)
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Aaron Plattner <aplattner@nvidia.com>
Acked-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Seriously not worth the effort of tracking this, especially now that
competent drivers don't have a limit. The sis driver does inspect this
member, but hilariously only so it can print the same information as
the core does.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Aaron Plattner <aplattner@nvidia.com>
Acked-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Only mach64 and rendition actually use this feature. Everyone else just
checks it in their ValidMode hook; these two can do the same.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Aaron Plattner <aplattner@nvidia.com>
Acked-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
We don't actually need (or intend) to keep this struct the same across
revisions.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Aaron Plattner <aplattner@nvidia.com>
Acked-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Nobody was ever calling this with a non-null argument for subdir list or
pattern list. Having done this, InitSubdirs is only ever called with a
NULL argument, so it's really just a complicated way of duplicating the
default list; we can remove that and just walk the list directly.
The minor error code was only ever used to distinguish between two cases
of LDR_BADUSAGE. Whatever.
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Callers only ever use this for a single directory anyway.
While we're at it, also move xf86DriverListFromCompile near its only
user in the X -configure code (and inline it out of existence), and
remove LoaderFreeDirList as it's unused (since X -configure is just
going to exit anyway, none of that code cares about cleanup).
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
There's no reason a driver should ever care about this.
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
indent(1) gets confused by function-like macros with no trailing
semicolon, which is fair enough really.
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
The idea here is that the driver might have once been old enough to not
have the driverFunc slot in DriverRec, with the module ABI not having
changed when it was added. That was ages ago, and drivers always declare
themselves with DriverRec not DriverRec1, so uninitialized slots will
simply be zero.
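With that assumption gone, the guard reduces to a plain NULL check; a
hedged sketch (surrounding names are illustrative):

/* no version dance needed: a static DriverRec zero-fills any slots
 * the driver didn't initialize */
if (drv->driverFunc)
    drv->driverFunc(pScrn, op, data);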
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Everybody using this functionality specifies a major version, which
makes sense. If you don't care about a minor version, that's equivalent
to saying you require minor >= 0, so just say so; likewise patch level.
Likewise ABI class is always specified.
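A hedged sketch of the resulting comparison (names are illustrative):

/* major and ABI class must match exactly; an unspecified minor or
 * patchlevel requirement is simply 0 */
if (major != req_major || abiclass != req_abiclass)
    return FALSE;
if (minor < req_minor ||
    (minor == req_minor && patch < req_patch))
    return FALSE;
return TRUE;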
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
The enum has been unused since at least the removal of elfloader.
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
This looks like more, but only if you don't compare it to the number
pulled in by misc.h.
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
This API is dumb. uname(3) exists, feel free to use it, but ideally
write to the interface, not to the OS. There are a couple of drivers
using this API; they could all reasonably just not.
This also removes the OS name from the loader subdirectory path search.
Having /usr/lib/xorg shared across OSes is a non-goal here.
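For reference, the portable libc replacement is tiny; this is plain
uname(3), not a server interface:

#include <sys/utsname.h>

struct utsname u;
if (uname(&u) == 0) {
    /* u.sysname is e.g. "Linux", u.release the kernel version */
}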
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Similar to its little brother, LoadSubModule. Currently all call sites
pass NULL anyway ;-)
Reviewed-by: Aaron Plattner <aplattner@nvidia.com>
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Allow OutputClass config snippets to modify the module-path.
Note that any specified ModulePaths will be prepended to the normal
ModulePath. The idea behind this is that any output-hardware-specific
modules should have preference over the normal modules.
One use case for this is the nvidia binary driver; it allows a
config snippet like this:
Section "OutputClass"
MatchDriver "nvidia"
Modulepath "/usr/lib64/nvidia/modules"
EndSection
This gets the nvidia-specific glx module loaded, but only when the
nvidia kernel driver is loaded.
Together with the glvnd work done recently, this allows the nouveau
+ mesa and nvidia-binary userspace stacks to co-exist on the same
system without any ldconfig / xorg.conf tweaking and the xserver will
automatically do the right thing depending on which kernel driver
(nouveau or nvidia) is loaded.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Allow using:
    Option "PrimaryGPU" "yes"
in an OutputClass section to override the default primary GPU device
selection, which picks the GPU used as output by the firmware.
If multiple output devices match an OutputClass section with
the PrimaryGPU option set, the first one enumerated becomes the
primary GPU.
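A hypothetical snippet combining this with driver matching (the
Identifier string is made up):

Section "OutputClass"
    Identifier "nvidia primary"
    MatchDriver "nvidia"
    Option "PrimaryGPU" "yes"
EndSection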
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
This is a preparation patch for allowing an OutputClass section to
override the default primary GPU device selection.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Add support for setting options in OutputClass Sections and having these
applied to any matching output devices.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Make OutputClassMatches take an xf86_platform_device pointer directly,
rather than an index into xf86_platform_devices. This makes things
easier for callers that already have an xf86_platform_device pointer.
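A sketch of the signature change (argument names are illustrative):

/* before */
Bool OutputClassMatches(XF86ConfOutputClassPtr oclass, int index);
/* after */
Bool OutputClassMatches(XF86ConfOutputClassPtr oclass,
                        struct xf86_platform_device *dev);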
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
xf86MatchDevice returns a dynamically allocated list of GDevPtrs;
free it when we're done with it.
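The calling pattern, sketched (error handling omitted):

GDevPtr *devs = NULL;
int n = xf86MatchDevice(driver_name, &devs);
/* ... inspect devs[0] .. devs[n - 1] ... */
free(devs);    /* the list itself is malloc'd, so free it */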
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
In InitOutput, if xf86HandleConfigFile returns CONFIG_NOFILE
(which it does if no config file or directory is present), the
autoconfig flag is set, causing xf86AutoConfig to be called
later on.
xf86AutoConfig calls xf86OutputClassDriverList via the
call tree:
xf86AutoConfig =>
listPossibleVideoDrivers =>
xf86PlatformMatchDriver =>
xf86OutputClassDriverList
and xf86OutputClassDriverList attempts to traverse a linked list
that is a member of the XF86ConfigRec struct pointed to by the
global xf86configptr, which is NULL at this point because the
XF86ConfigRec struct is only allocated (by xf86readConfigFile)
AFTER the config file and directory have been successfully
opened; the CONFIG_NOFILE return from xf86HandleConfigFile
occurs BEFORE the call to xf86readConfigFile, which allocates
the XF86ConfigRec struct.
Rx: In read.c (for symmetry with xf86freeConfig, which already
appears in this file), add a new function xf86allocateConfig
which tests the value of xf86configptr and, if it's NULL,
allocates the XF86ConfigRec struct and deposits the pointer
in xf86configptr. In xf86Parser.h, add a prototype for the
new xf86allocateConfig function.
Back in read.c, #include "xf86Config.h". In xf86readConfigFile,
change the open-coded calloc call to a call to the new
xf86allocateConfig function.
In xf86AutoConfig.c, add a call to the new xf86allocateConfig function
to the beginning of xf86AutoConfig to make sure the XF86ConfigRec struct
is allocated.
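A minimal sketch of the new function, following the description above:

void
xf86allocateConfig(void)
{
    if (!xf86configptr)
        xf86configptr = calloc(1, sizeof(XF86ConfigRec));
}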
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Ben Crocker <bcrocker@redhat.com>
If we did not find any non-GPU screens, try again, ignoring the notion
of any video device being the primary device. This fixes Xorg exiting
with a "no screens found" error when using virtio-vga in a
virtual machine and when using a device driven by simpledrm.
This is a somewhat ugly solution, but it is the best I can come up with
without major surgery to the bus and probe code.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
This is primarily a preparation patch for fixing the xserver exiting with
a "no screens found" error even though there are supported video cards,
due to the server not recognizing any card as the primary card.
This also fixes the (mostly theoretical) case of a platformBus-capable
driver adding a device as a GPU screen before a driver which only supports
the old PCI probe method gets a chance to claim it as a normal screen.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
If foundScreen is TRUE, none of the code below the removed if executes
before we reach the "return foundScreen;" at the end, so this entire
if block is redundant.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Special case for the systemd-logind case in xfree86: when we're vt-switched
away and a device is plugged in, we get a paused fd from logind. Since we
can't probe the device or do anything with it, we store that device in the
xfree86 DDX and handle it later when we vt-switch back. The device is not
added to inputInfo.devices until that time.
When the device is removed while still vt-switched away, the config system
never notifies the DDX. It only runs through inputInfo.devices, and our
device was never added to that.
When a device is plugged in, removed, and plugged in again while vt-switched
away, we have two entries in the xfree86-specific list that refer to the same
device node, both pending for addition later. On VT switch back, the first one
(the already-removed one) is added successfully; the second one (the
still-plugged-in one) fails. Since the fd is correct, the device works until
it is removed again. The removed device's config_info (i.e. the syspath)
doesn't match the actual device we added, though (the input number increases
with each plug), so it doesn't get removed, the fd remains open, and we lose
track of the fd count. Plugging the device in again leads to a dead device.
Fix this by adding a call to notify the DDX to purge any remainders of devices
with the given config_info, that's the only identifiable bit we have at this
point.
https://bugs.freedesktop.org/show_bug.cgi?id=97928
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
No functional changes, but it makes it easier to remove elements from
the middle of the list (future patch).
We don't have an init call into this file, so the list is manually
initialized.
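Since the address of a static object is a constant expression, the
manual initialization can be self-referencing; a sketch (the list name
is illustrative):

static struct xorg_list pending_devices = {
    .next = &pending_devices,
    .prev = &pending_devices,
};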
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
They're identically laid-out structs but let's use the right type to search
for our desired value.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
The option is misleading, and using it disables both direct and
accelerated indirect GLX. In such cases the xserver GLX attempts to
match DRISW (IGLX) configs with the DRI2/3 (direct GLX) ones, leading to
all sorts of fun experiences.
Remove the option until we get a clear split and control over direct vs
indirect GLX.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
This fix makes the following xorg.conf work:
Section "ServerFlags"
Option "AutoAddGPU" "off"
EndSection
Section "Device"
Identifier "Amd"
Driver "ati"
BusID "PCI:1:0:0"
EndSection
Section "Device"
Identifier "Intel"
Driver "modesetting"
BusID "pci:0:2:0"
EndSection
Section "Screen"
Identifier "Screen0"
Device "Intel"
GPUDevice "Amd"
EndSection
Without AutoAddGPU off, the modesetting DDX would also be loaded for
the GPUDevice.
Signed-off-by: Qiang Yu <Qiang.Yu@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
The new platform bus code and the old PCI bus code overlap. Platform bus
can handle any type of device, including PCI devices, whereas the PCI code
can only handle PCI devices. Some drivers only support the old style
PCI-probe methods, but the primary device detection code is server based,
not driver based; so we might end up with a primary device which only has
a PCI bus-capable driver, but was detected as primary by the platform
code, or the other way around.
(The above paragraph was shamelessly stolen from Hans de Goede, and
customized.)
The latter case applies to QEMU's virtio-gpu-pci device: it is detected as
a BUS_PCI primary device, but we actually probe it first (with the
modesetting driver) through xf86platformProbeDev(). The
xf86IsPrimaryPlatform() function doesn't recognize the device as primary
(it bails out as soon as it sees BUS_PCI); instead, we add the device as a
secondary graphics card under "autoAddGPU". In turn, the success of this
automatic probing-as-GPU prevents xf86CallDriverProbe() from proceeding to
the PCI probing.
The result is that the server exits with no primary devices detected.
Commit cf66471353 ("xfree86: use udev to provide device enumeration for
kms devices (v10)") added "cross-bus" matching to xf86IsPrimaryPci(). Port
that now to xf86IsPrimaryPlatform(), so that we can probe virtio-gpu-pci
as a primary card in platform bus code.
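A hedged sketch of the ported check in xf86IsPrimaryPlatform() (the
comparison helper here is hypothetical; the real code may compare the
PCI domain/bus/dev/func fields directly):

if (primaryBus.type == BUS_PCI && plat->pdev)
    return pci_devices_match(primaryBus.id.pci, plat->pdev); /* hypothetical */
return FALSE;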
Cc: Adam Jackson <ajax@redhat.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Hans de Goede <hdegoede@redhat.com>
Cc: Keith Packard <keithp@keithp.com>
Cc: Marcin Juszkiewicz <mjuszkiewicz@redhat.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-By: Marcin Juszkiewicz <mjuszkiewicz@redhat.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Xorg -configure relies on the bus implementation, e.g. xf86pciBus.c,
to call xf86AddBusDeviceToConfigure(). The new xf86platformBus code
does not have support for this.
Almost all drivers support both the xf86platformBus and xf86pciBus
nowadays, and the generic xf86Bus xf86CallDriverProbe() function
prefers the new xf86platformBus probe method when available.
Since the platformBus paths do not call xf86AddBusDeviceToConfigure(),
this results in Xorg -configure failing with the following error:
"No devices to configure. Configuration failed.".
Adding support for the xf86Configure code to xf86platformBus.c is
non-trivial, and since we advise users to normally run without any
xorg.conf at all, it is not worth the trouble.
However, some users still want to use Xorg -configure to generate a
template config file, so this commit implements a minimal fix to make
things work again for PCI devices by skipping the platform probe
method when xf86DoConfigure is set.
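A sketch of the minimal fix in the generic probe path (the condition's
shape is illustrative):

/* skip the platform probe while generating a config, so the PCI
 * path runs and xf86AddBusDeviceToConfigure() gets called */
if (drv->platformProbe && !xf86DoConfigure)
    foundScreen = xf86platformProbeDev(drv);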
This has been tested on a system with integrated Intel graphics, with
both the intel and modesetting drivers, and restores Xorg -configure
functionality in both cases.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
This loop was written in a buggy style, causing a NULL driver pointer to
be passed to copyScreen(). copyScreen() only uses that to generate an
identifier string, so this is mostly harmless on systems whose asprintf()
accepts NULL for a "%s" argument (the generated identifiers are off by
one relative to the driver names, and the last one contains NULL).
For systems that don't accept NULL for "%s", this would cause a
segmentation fault when this code is used (no xorg.conf, but a partial
config in xorg.conf.d, for instance).
Signed-off-by: Matthieu Herrb <matthieu@herrb.eu>
Reviewed-by: Keith Packard <keithp@keithp.com>
In xf86VGAarbiterScrnInit, when pEnt->bus.type is BUS_PLATFORM,
pScrn->vgaDev is never set and therefore stays zero.
The variable rsrc_decodes in xf86VGAarbiterAllowDRI is not initialized,
which goes wrong when pScrn->vgaDev == 0 and vga_count > 1: because
pScrn->vgaDev == 0, pci_device_vgaarb_get_info only sets the value of
vga_count and never writes rsrc_decodes. xf86VGAarbiterAllowDRI then
returns whatever happens to be in the uninitialized variable, so the
result differs between platforms: on one platform rsrc_decodes happens
to start out as 0 and the function returns TRUE, while on another it
happens to start out as 32767 and the function returns FALSE, which
disables direct rendering.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=96937
Signed-off-by: Emily Deng <Emily.Deng@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Any code called from the driver ScreenInit may want to refer to
pScrn->pScreen. As the function passed to AddScreen is the first place
the DDX sees a new screen, the generic code needs to make sure that
value is set before passing control to the video driver's
initialization code.
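The fix itself is small; a sketch of the generic wrapper (exact code may
differ):

/* make pScrn->pScreen valid before the driver sees the screen */
pScrn->pScreen = pScreen;
return (*pScrn->ScreenInit)(pScreen, argc, argv);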
This was found by running a driver which didn't bother to set this
value when the initial colormap was installed; xf86RandR12LoadPalette
tried to use pScrn->pScreen and crashed.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97124
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-and-Tested-by: Michel Dänzer <michel.daenzer@amd.com>
This is a problem for the libinput driver, which uses the same context
across multiple devices. The driver may be halfway through setting up an
input device (and the only way to do so is to add it to libinput) when
the input thread comes in and reads events. This then causes mayhem when
data that hasn't been set up yet is dereferenced.
In my case the cause was the call to libinput_path_remove_device() inside
preinit racing with evdev_dispatch_device() handling of ENODEV. The sequence
was:
- thread 2 gets an event and calls evdev_dispatch_device()
- thread 1 calls libinput_path_remove_device() which sets the device->source
to NULL
- thread 2 reads from the fd, gets ENODEV and now removes the device->source,
dereferencing the null-pointer
This is the one I could reproduce the most, but there are other potential
pitfalls that affect any driver that uses the same fd for multiple devices.
Avoid all this and wrap PreInit into the lock.
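That is, sketched with the server's input-thread lock (error handling
omitted):

input_lock();
ret = drv->PreInit(drv, pInfo, 0);
input_unlock();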
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Keith Packard <keithp@keithp.com>
If a device couldn't be enabled, we left the lock hanging.
This patch also removes the leftover OsReleaseSignals() call, which is
now unnecessary.
Note that input_unlock() now happens later than OsReleaseSignals()
previously did. RemoveDevice() manipulates the input device and its file
descriptors, so it's safer to put the input_unlock() call after
RemoveDevice() to avoid events coming in while the device is being removed.
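A sketch of the corrected error path (shape is illustrative):

if (!enabled) {
    RemoveDevice(dev, TRUE);
    /* unlock only after RemoveDevice(), so no events come in while
     * the device and its fds are being torn down */
    input_unlock();
    return BadMatch;
}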
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Keith Packard <keithp@keithp.com>
Instead of breaking colourmap handling when the driver supports RandR 1.2
gamma, hook them up so that the hardware LUTs reflect the combination of the
current colourmap and gamma states. I.e. combine the colourmap, the
global gamma value/ramp and the RandR 1.2 per-CRTC gamma ramps into one
combined LUT per CRTC.
Fixes e.g. gamma sliders not working in games.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=27222
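Conceptually, each CRTC's hardware LUT is now the composition sketched
below (index handling simplified; the real code copes with differing
palette and gamma-ramp sizes):

/* combined LUT: gamma applied on top of the colourmap */
for (i = 0; i < lut_size; i++)
    lut[i] = gamma_ramp[palette[i]];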
v2:
* Initialize palette_size and palette struct members, fixes crash on
server startup.
v3:
* Free randrp->palette in xf86RandR12CloseScreen, fixes memory leak.
v4:
* Call CMapUnwrapScreen if xf86RandR12InitGamma fails (Emil Velikov).
* Still allow xf86HandleColormaps to be called with a NULL loadPalette
parameter in the xf86_crtc_supports_gamma case.
v5:
* Clean up inner loops in xf86RandR12CrtcComputeGamma (Keith Packard)
* Move palette update out of per-CRTC loop in xf86RandR12LoadPalette
(Keith Packard)
v6:
* Handle reallocarray failure in xf86RandR12LoadPalette (Keith Packard)
Reviewed-by: Keith Packard <keithp@keithp.com>
ATTR_KEY maps to ID_INPUT_KEY which is set for any device with keys.
ID_INPUT_KEYBOARD and thus ATTR_KEYBOARD is set for devices that are actual
keyboards (and have a set of expected keys).
Hand-written match rules may only apply ID_INPUT_KEYBOARD, so make sure we
match on that too.
Arguably we should've been matching on ATTR_KEYBOARD only all along, but
changing that now would likely introduce regressions.
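A sketch of the resulting match (flag names from this message;
surrounding code is illustrative):

/* a device with either attribute participates in keyboard matching */
if (attrs->flags & (ATTR_KEY | ATTR_KEYBOARD))
    match = TRUE;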
Reported-by: Marty Plummer <netz.kernel@gmail.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
This removes the last uses of fd_set from the server interfaces
outside of the OS layer itself.
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
With no users of the interface needing the readmask anymore, we can
remove it from the argument passed to these functions.
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Remove code in xf86Wakeup for dealing with other input and switch to
using the new NotifyFd interface.
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>