Most of the support for such OSes was removed in 2010 for
xorg-server-1.10.0, but a few bits lingered on, and a few
comments were left out of date.
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1667>
This has been nothing but an alias for two decades now (since somewhere
around R6.6), so there doesn't seem to be any practical need for this
indirection. The macro still needs to remain as long as (external) drivers
are still using it.
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1529>
This has been nothing but an alias for two decades now (since somewhere
around R6.6), so there doesn't seem to be any practical need for this
indirection. The macro still needs to remain as long as (external) drivers
are still using it.
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1529>
This has been nothing but an alias for two decades now (since somewhere
around R6.6), so there doesn't seem to be any practical need for this
indirection. The macro still needs to remain as long as (external) drivers
are still using it.
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1529>
This has been nothing but an alias for two decades now (since somewhere
around R6.6), so there doesn't seem to be any practical need for this
indirection. The macro still needs to remain as long as (external) drivers
are still using it.
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1529>
The xnfreallocarray macro was added a decade ago alongside (and just as an
alias to) XNFreallocarray. It's only used in a few places and only saves us
from passing the first parameter (NULL), so the actual benefit isn't huge.
No (known) driver is using it, so the macro can be dropped entirely.
Fixes: ae75d50395
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1529>
The xnfallocarray macro was added a decade ago alongside (and just as an
alias to) XNFreallocarray.
No (known) driver is using it, so the macro can be dropped entirely.
Fixes: ae75d50395
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1529>
This has been nothing but an alias for two decades now (since somewhere
around R6.6), so there doesn't seem to be any practical need for this
indirection. The macro still needs to remain as long as (external) drivers
are still using it.
Fixes: ded6147bfb
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1529>
Several places use the _X_ATTRIBUTE_PRINTF macro from X11/Xfuncproto.h
but fail to include it, so whether it's available depends on other headers
pulling it in by mere accident, which quickly causes trouble if the include
order changes. Clean that up by adding explicit include statements.
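For instance, a declaration that uses the macro should pull in the header
itself rather than hope someone else did (a minimal sketch; the logging
function name is made up for illustration):

    #include <X11/Xfuncproto.h>   /* for _X_ATTRIBUTE_PRINTF */

    /* format string is argument 1, variadic arguments start at 2 */
    void MyLogMessage(const char *fmt, ...) _X_ATTRIBUTE_PRINTF(1, 2);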
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1580>
Clears -Wcalloc-transposed-args warnings from gcc 14.1, such as:
../dix/main.c:165:42: warning: ‘calloc’ sizes specified with ‘sizeof’ in the
earlier argument and not in the later argument [-Wcalloc-transposed-args]
165 | serverClient = calloc(sizeof(ClientRec), 1);
| ^~~~~~~~~
../dix/main.c:165:42: note: earlier argument should specify number of
elements, later size of each element
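The fix is simply to swap the arguments so the element count comes first,
mirroring the example from the warning:

    /* before: sizeof() in the earlier argument triggers the warning */
    serverClient = calloc(sizeof(ClientRec), 1);

    /* after: number of elements first, size of each element second */
    serverClient = calloc(1, sizeof(ClientRec));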
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1606>
The header exists on all supported platforms; some might still need
to define _XOPEN_SOURCE to get pow() defined.
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1548>
Before this change, the core keyboard always used the "us" layout even if
you configured the build with another default layout.
Signed-off-by: Michael Dluhosch <michael.dluhosch@gmail.com>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1549>
This header is internal (not installed) and holds definitions for sources
in config/, so it's cleaner to move it to config/, too.
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1357>
It's cleaner to have public headers hold only public stuff, and
xf86_os_support seems a much more appropriate place for this.
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1456>
Instead of relying on very indirect includes, it's cleaner when every
file explicitly includes what it really needs.
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1417>
Since we already had to rename some of them in order to fix name clashes
on Win32, it's now time to rename all the remaining ones.
The old names are still present as #defines aliasing the new ones, just for
backwards compatibility.
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1355>
Windows' native headers use some of our RT_* define names for other things.
Since the naming isn't very nice anyway, introduce some new ones
(X11_RESTYPE_NONE, X11_RESTYPE_FONT, X11_RESTYPE_CURSOR) and define the old
ones as aliases to them, in case some out-of-tree code still uses them.
With this change, we don't need to be so extremely careful about include
ordering or keep explicit #undef's around to prevent name clashes on
Win32 targets.
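The compatibility layer boils down to a handful of defines (a sketch; the
exact values follow include/resource.h):

    /* new, clash-free names */
    #define X11_RESTYPE_NONE   ((RESTYPE)0)
    #define X11_RESTYPE_FONT   ((RESTYPE)1)
    #define X11_RESTYPE_CURSOR ((RESTYPE)2)

    /* old names kept as aliases for out-of-tree code */
    #define RT_NONE   X11_RESTYPE_NONE
    #define RT_FONT   X11_RESTYPE_FONT
    #define RT_CURSOR X11_RESTYPE_CURSOR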
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1355>
These functions shouldn't be called by drivers or extensions, and thus
shouldn't be exported. Also move them to a separate header, so the
already huge ones aren't cluttered with even more things.
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1334>
This header isn't installed, so no external modules can use the
functions declared there. Thus we can unexport them all.
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1358>
These aren't used by any drivers/modules, just DDXes, so there's no need to
export them.
Note: tigervnc does use them, but it has its own DDX that is directly
linked in, just like the in-tree DDXes, so no exporting is needed.
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1349>
The typedefs and defines from dgaproc.h are used by drivers, so they need to
remain part of the public API. But there's no need to clutter the public header
with non-exported function declarations.
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1323>
It's just a dumb wrapper around PrivsElevated(), and it's only called in a few
places, while others call PrivsElevated() directly - thus it's not needed and
can be dropped.
Note that it's also not called by drivers, so the export was unnecessary.
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1324>
With the transition from autoconf to meson, these aren't actually supported
anymore, and re-adding them isn't planned. Thus the now-dead code paths
can be completely removed.
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1286>
Potentially, the pointer to the mode name could be unset; this can
occur with the xf86-video-nv DDX. In that case there isn't much we can do
except check whether the next mode is any better.
Signed-off-by: Yusuf Khan <yusisamerican@gmail.com>
xserver fails to generate usable resolutions with 90Hz framerate
panels (the same issue was encountered with 3 different 2.5K resolution
panels). All the resolutions shown by xrandr lead to a blank screen except
the one written in the EDID.
Ville Syrjälä from Intel provided a method to calculate the preferred
clock and refresh rate from the existing resolution table, and this
works for the issue.
v2. xf86ModeVRefresh might return 0; check it before using it.
v3. Markus reported on Launchpad that the issue is not a division by 0;
it's the "preferred" pointer being accessed unconditionally.
BugLink: https://launchpad.net/bugs/1999852
Closes: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1388
Signed-off-by: Chia-Lin Kao (AceLan) <acelan.kao@canonical.com>
Add a workaround to accept devices of the kernel's ofdrm driver.
Makes Xorg work on Open Firmware's pre-configured display with the
DRM graphics stack.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
When you set the auto-repeat rate through xset to something like 250 40:
`xset r rate 250 40`
this sets the first delay to 250ms and the rate to 40Hz (25ms).
However, if you were to apply this configuration from an xorg config file,
the result would be the first delay being applied correctly,
but the repeat rate would be set to 25Hz instead. This is because the config
option uses a rate of repeats per second but XKB stores it as an interval.
Make sure this is converted correctly.
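The conversion is a simple reciprocal:

    /* Option "AutoRepeat" "250 40" means delay 250ms, rate 40Hz */
    int delay_ms = 250;
    int rate_hz = 40;
    int interval_ms = 1000 / rate_hz;   /* 25ms between repeats, not 25Hz */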
Fixes: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1558
Signed-off-by: EXtremeExploit <pedro.montes.alcalde@gmail.com>
The X server swapping code is a huge attack surface; much of this code
is untested and prone to security issues. The use-case of byte-swapped
clients is very niche, so let's disable this by default and allow it
only when the respective config option or commandline flag is given.
For Xorg, this adds the ServerFlag "AllowByteSwappedClients" "on".
For all DDX, this adds the commandline options +byteswappedclients and
-byteswappedclients to enable or disable, respectively.
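For Xorg, a minimal xorg.conf snippet using the new flag looks like:

Section "ServerFlags"
Option "AllowByteSwappedClients" "on"
EndSection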
Fixes #1201
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1029>
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Commit 5145742fb6 accidentally bumped the videodrv ABI version from 26.0
to 26.6 in one go.
Change it back to 26.1 as per the documented process for minor additions.
Fixes: 5145742fb6 - randr: introduce rrCrtcGetInfo DDX function
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
This allows rrCrtcGetInfo to override the values in the XRRCrtcGetInfo
reply. One use case is to allow Xwayland to return the current emulated
mode for the specific client instead of the global mode.
Signed-off-by: Minh Phan <phanquangminh217@gmail.com>
Acked-by: Olivier Fourdan <ofourdan@redhat.com>
xf86RotateCrtcRedisplay() is about to be used outside of xf86Rotate.c in
order to copy transformed pixmaps, so fix up its interface by specifying
the source drawable and destination pixmap rather than assuming the root
drawable and rotated pixmap, respectively. In addition, add an argument to
make xf86RotateCrtcRedisplay() not perform any transformations, which is an
indicator that it should only copy a transformed pixmap rather than
actually transform a pixmap.
These changes make it possible to use xf86RotateCrtcRedisplay() to not
only copy transformed pixmaps, but also actually transform pixmaps, making
it very useful outside of xf86Rotate.c.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
The current X server infrastructure sets the modesetting driver as the default
driver to handle PCI hotplug of a GPU device. This prevents the respective DDX
driver (like the AMDGPU DDX driver) from taking control of the card.
This patch:
- Adds a few functions and fine-tunes the GPU hotplug infrastructure to allow
the DDX driver to be loaded, if it is configured in the X config file
options as "hotplug-driver".
- Scans and updates the PCI device list before adding the new GPU device
in platform, so that the association of the platform device and PCI device
is in place (dev->pdev).
- Adds documentation of this new option
An example usage in the config file would look like:
Section "OutputClass"
Identifier "AMDgpu"
MatchDriver "amdgpu"
Driver "amdgpu"
HotplugDriver "amdgpu"
EndSection
V2:
Fixed typo in commit message (Martin)
Added R-B from Adam.
Added ACK from Alex and Martin.
V3:
Added an output class based approach for finding the DDX driver (Aaron)
Rebase
V4:
Addressed review comment from Aaron:
The GPU hotplug handling driver's name is now read from the DDX config file
options. In this way only the DDX drivers interested in handling GPU hotplug
will be picked and loaded; for others the modesetting driver will be used as
usual.
V5:
Addressed review comments from Aaron:
- X config option to be listed in CamelCase.
- Indentation fix at one place.
- Code readability related optimization.
V6:
Addressed review comments from Aaron:
- Squash the doc in the same patch
- Doc formatting changes
Cc: Alex Deucher <alexander.deucher@amd.com>
Suggested-by: Aaron Plattner <aplattner@nvidia.com> (v3)
Acked-by: Martin Roukala <martin.roukala@mupuf.org> (v1)
Acked-by: Alex Deucher <alexander.deucher@amd.com> (v1)
Reviewed-by: Adam Jackson <ajax@redhat.com> (v1)
Signed-off-by: Shashank Sharma <shashank.sharma@amd.com>
Changes the check for trying the modesetting driver from "if defined(__linux__)"
to a meson check for whether we built the driver for this platform.
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
I wanted to simplify the logic, and thought this was a good opportunity
to eliminate local diffs.
I don't want to list OSes without wsfb, because I understand that it is a
NetBSD/OpenBSD driver, so always have it as a fallback for us.
Additionally, I understand "fbdev" is Linux-specific, so have the logic
match this intent.
Finishes the work started in commit cd0d4c1bb5
to remove checks for the variable that never varied from 0 after the code
to change it was removed by commit 511c60bc73
in 2006 (xorg-server-1.2.0).
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Correctness is ensured by checking the md5sum of the result before and after
the commit (it's the same).
Fixes LGTM warning "Comparison is always false because numTimings <= 0."
Signed-off-by: Konstantin Kharlamov <Hi-Angel@yandex.ru>
Put in a workaround to accept devices of the kernel's hyperv_drm
driver. This makes Xorg work on Hyper-V Gen 1/2 with the DRM graphics
stack.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Initially reported downstream in Gentoo. Manifests with errors like:
```
gnu/bin/ld: hw/xfree86/common/libxorg_common.a(xf86fbBus.c.o): in function `xf86ClaimFbSlot':
xf86fbBus.c:(.text+0x20): undefined reference to `sbusSlotClaimed'
/usr/lib/gcc/sparc-unknown-linux-gnu/11.2.0/../../../../sparc-unknown-linux-gnu/bin/ld: xf86fbBus.c:(.text+0x2c): undefined reference to `sbusSlotClaimed'
```
While we use the headers in meson.build, we don't reference xf86sbusBus.c
which defines the missing symbols like sbusSlotClaimed.
Bug: https://bugs.gentoo.org/828513
Signed-off-by: Sam James <sam@gentoo.org>
Add a new interface to _rrScrPriv to make it possible for the server to
delay answering a lease request, at the cost of blocking the client. This
is needed for implementing drm-lease-v1, as the Wayland protocol has no
defined time table for responding to lease requests.
Signed-off-by: Xaver Hugl <xaver.hugl@gmail.com>
Acked-by: Michel Dänzer <mdaenzer@redhat.com>
When switching to VT, the ioctl DRM_DROP_MASTER must be done before
the ioctl VT_RELDISP. Otherwise the kernel can't change the modesetting
reliably, and this leads to the console not showing up in some cases, like
after unplugging a docking station with a DP or HDMI monitor.
Before doing the VT_RELDISP, send a dbus message to logind, to
pause the drm device, so logind will do the ioctl DRM_DROP_MASTER.
With this patch, the order in which logind sends the resume
events changes, and drm will be sent last instead of first,
so there is also a fix to call systemd_logind_vtenter() at the right time.
Signed-off-by: Jocelyn Falempe <jfalempe@redhat.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Quite a lot of applications currently expect the screen DPI exposed by
the X server to be 96 even when the real display DPI is different.
Additionally, currently Xwayland completely ignores any hardware
information and sets the DPI to 96. Accordingly, the new behavior, even
if it fixes a bug, should not be enabled automatically for all users.
A better solution would be to make the default DPI stay as is and enable
the correct behavior with a command line option (maybe -dpi auto, or
similar). For now let's just revert the bug fix.
This reverts commit 05b3c681ea.
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
SimpleDRM 'devices' are fallback devices and do not have a busid,
so they were getting skipped. This allows simpledrm to work
with the modesetting driver.
If there is an explicit configuration, assign the RandR provider
of the GPUDevice to the screen it was specified for.
If there is no configuration (default case) the screen number is
still 0 so it doesn't change behaviour.
The result is e.g:
# DISPLAY=:0.2 xrandr --listproviders
Providers: number : 2
Provider 0: id: 0xd2 cap: 0x2, Sink Output crtcs: 1 outputs: 1 associated providers: 0 name:modesetting
Provider 1: id: 0xfd cap: 0xb, Source Output, Sink Output, Sink Offload crtcs: 2 outputs: 2 associated providers: 0 name:Intel
Signed-off-by: Zoltán Böszörményi <zboszor@gmail.com>
xf86_platform_devices[i].pdev may be NULL in cases where we fail to parse
the busid in config_udev_odev_setup_attribs() (see also [1], [2]), such as
when udev does not give us ID_PATH. This in turn leads to
platform_find_pci_info() being not called and pdev being NULL.
[1]: https://gitlab.freedesktop.org/xorg/xserver/-/issues/993
[2]: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1076
Reviewed-by: Zoltán Böszörményi <zboszor@gmail.com>
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
This is the only place where we don't check that
primaryBus.id.plat->pdev is non-NULL before accessing its members.
It may be NULL in cases where we fail to parse the busid in
config_udev_odev_setup_attribs() (see also [1], [2]), such as when udev
does not give us ID_PATH. This in turn leads to
platform_find_pci_info() being not called and pdev being NULL in one of
the items within the xf86_platform_devices array. For this to cause a
crash we only need it to become the primaryBus device.
[1]: https://gitlab.freedesktop.org/xorg/xserver/-/issues/993
[2]: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1076
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
screenp->displays[count] (passed to configDisplay() in
configScreen()) is NULL if there is no Virtual setting
in the configuration.
Fixes: f8a6be04d0 ("xfree86: Change
displays array to pointers array to fix invalid pointer issues
after table reallocation")
Signed-off-by: Zoltán Böszörményi <zboszor@gmail.com>
The physical dimensions of a display can be obtained not just from the
configuration or DDC, but also directly from the kernel via drmModeGetConnector().
Until now, the X server silently discarded these values even when neither
configuration nor EDID was present, and fell back to the default DPI.
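The kernel reports the size on the connector (a minimal libdrm sketch;
fd and connector_id are assumed to be valid):

    drmModeConnector *conn = drmModeGetConnector(fd, connector_id);
    if (conn && conn->mmWidth > 0 && conn->mmHeight > 0) {
        /* derive DPI from conn->mmWidth / conn->mmHeight instead of
         * falling back to the default */
    }
    drmModeFreeConnector(conn);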
There are rare cases where xf86SetDepthBpp() resizes the displays array in
confScreen. As that array is shared between a set of ScrnInfoRecs, the realloc
might invalidate cached DispPtr values in other ScrnInfoRec objects.
If we change the displays array into an array of pointers to DispRec, the
cached DispRec pointers in the ScrnInfoRecs won't become invalid after a
reallocation of the displays array.
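In other words (a sketch; DispRec/DispPtr are the xf86 config types):

    /* before: DispRec displays[] - a realloc may move every element,
     * stranding any cached DispPtr held elsewhere */
    /* after: DispPtr displays[] - a realloc moves only the pointer
     * table; each DispRec allocation stays put */
    confScreen->displays = realloc(confScreen->displays,
                                   count * sizeof(DispPtr));
    confScreen->displays[count - 1] = calloc(1, sizeof(DispRec));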
Signed-off-by: Łukasz Spintzyk <lukasz.spintzyk@synaptics.com>
On FreeBSD 13.0-CURRENT for PowerPC64 big-endian (BE), X was
crashing in some cases. For instance, when twm was started
and the background was clicked to open its menu, X crashed
with a segmentation fault, trying to dereference a null pointer
at CreatePicture().
There were 2 issues with xorg-server's handling of RGB masks that
caused the pointer above to be null and thus the crash:
- wrong use of ffs() to get the RGB offsets from the masks
- overflow when shifting a 16-bit integer
This change fixes both issues. They happen when the system is BE
but has a video adapter using a little-endian (LE) ARGB32
framebuffer. In order to display the correct colors, this setup
requires a BE RGBA32 color format to be used by X, by setting
the RGB masks appropriately, which didn't work properly because of
the issues above.
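Both bug classes look roughly like this (a hedged sketch, not the actual
server code):

    /* ffs() is 1-based: ffs(0x00ff0000) == 17, so computing a bit
     * offset from a mask needs the -1 */
    int offset = ffs(mask) - 1;

    /* a 16-bit component shifted toward the high bits of a 32-bit
     * pixel can overflow; widen before shifting */
    CARD16 component = 0xffff;
    CARD32 pixel = (CARD32)component << offset;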
The code path added by commit 69e4b8e6 (xfree86: attempt to autoconfig
gpu slave devices (v3)) assumes that it will only be run if the primary
device on the screen is the first device in xf86configptr->conf_device_lst.
While this is true most of the time, there are two specific cases where
this assumption fails.
First, if the first device in conf_device_lst is assigned to a different
seat than the running X server, it will be skipped by the previous
FIND_SUITABLE macro usage. Second, if the primary device was explicitly
assigned to the screen but auto_gpu_device is still set and no secondary
devices were explicitly listed, that device may not be the first device
in conf_device_lst.
When the first device in conf_device_lst is not the primary device
assigned to the screen, two problems emerge. First, the first device in
conf_device_lst will never be assigned to the screen as a secondary
device. Second, the primary device is additionally assigned to the
screen as a secondary device. The combination of these problems causes
certain otherwise valid configurations to be invalid. For example, if a
primary device is assigned to a screen and a secondary device is listed
in the config but not explicitly assigned to the screen, then one order
of the device sections results in a usable PRIME or Reverse PRIME setup
and the other order does not.
This commit removes the assumption that the primary device is the first
device in conf_device_lst by starting the loop from the start of
conf_device_lst and skipping the primary device when it is encountered.
Signed-off-by: Jacob Cherry <jcherry@nvidia.com>
This adds a new flag POINTER_RAWONLY for GetPointerEvents() which does
pretty much the opposite of POINTER_NORAW.
Basically, this tells GetPointerEvents() that we only want the
DeviceChanged events and any raw events for this motion but no actual
motion events.
This is preliminary work for Xwayland to be able to use relative motion
events for raw events. Xwayland would use absolute events for raw
events, but some X11 clients (wrongly) assume raw events to be always
relative.
To allow such clients to work with Xwayland, it needs to switch to
relative raw events (if those are available from the Wayland
compositor).
However, Xwayland cannot use relative motion events for actual pointer
location because that would cause a drift over time, the pointer being
actually controlled by the Wayland compositor.
So Xwayland needs to be able to send only relative raw events, hence
this API.
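Usage from a DDX would look roughly like this (a hedged sketch on top of
the input API; dev, dx and dy are assumed):

    ValuatorMask mask;
    valuator_mask_zero(&mask);
    valuator_mask_set_double(&mask, 0, dx);
    valuator_mask_set_double(&mask, 1, dy);

    /* deliver DeviceChanged/raw events only, no motion events */
    QueuePointerEvents(dev, MotionNotify, 0,
                       POINTER_RELATIVE | POINTER_RAWONLY, &mask);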
Bump the ABI_XINPUT_VERSION minor version to reflect that API addition.
v2: Actually avoid sending motion events (Peter)
v3: Keep sending raw emulated events with RAWONLY (Peter)
Suggested-by: Peter Hutterer <peter.hutterer@who-t.net>
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Related: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1130
The definition relies on IOPortBase, which is only ever set in
hw/xfree86/os-support/bsd/arm_video.c
This caused build failures on linux/mips with GCC 10, due to this
change (from https://gcc.gnu.org/gcc-10/changes.html#c):
"GCC now defaults to -fno-common. As a result, global variable accesses
are more efficient on various targets. In C, global variables with
multiple tentative definitions now result in linker errors. With
-fcommon such definitions are silently merged during linking."
As a result anything including compiler.h would get its own definition
of IOPortBase and the linker would error out.
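The failure mode and the usual fix pattern look like this (an illustrative
sketch):

    /* in compiler.h: a tentative definition, duplicated into every
     * translation unit that includes the header; with gcc 10's default
     * -fno-common the linker reports multiple definitions */
    unsigned int IOPortBase;

    /* fix: declare in the header ... */
    extern unsigned int IOPortBase;
    /* ... and define it in exactly one source file */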
Commit 6a5a4e6037 removed the option to
configure useSIGIO. Indeed, the xfree86 SIGIO support was
reworked to use internal versions of OsBlockSIGIO and OsReleaseSIGIO.
As a result, useSIGIO is no longer needed and can be dropped.
Fixes: 6a5a4e60 - Remove SIGIO support for input [v5]
Closes: xorg/xserver#1107
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Prabhu Sundararaj <prabhu.sundararaj@nxp.com>
Signed-off-by: Mylène Josserand <mylene.josserand@free-electrons.com>
Signed-off-by: Fabrice Fontaine <fontaine.fabrice@gmail.com>
With !155, the device bus ID received via udev is constructed
properly with the "usb:" prefix. But it is not enough to
make the following line work in Section "Device":
BusID "usb:0:1.2:1.0"
Introduce BUS_USB, so the prefix can be distinguished from BUS_PCI
and check the supplied BusID value against device->attribs->busid
in xf86PlatformDeviceCheckBusID().
Signed-off-by: Böszörményi Zoltán <zboszor@pr.hu>
This is useful for mock input drivers that control the server in
integration tests. Given that input submission happens on a different
thread than processing, it's otherwise impossible for the driver to
synchronize with the completion of the processing of submitted events.
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
Most (but not all) of these were found by using
codespell --builtin clear,rare,usage,informal,code,names
but not everything reported by that was fixed.
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
This option was implemented before the drivers were split in ≈2006,
and e.g. XWin still supports it.
With this commit, Xorg regains support, so that the following configuration can
be used to set the repeat rate for all keyboard devices without having to modify
Xorg command-line flags or having to automate xset(1):
Section "InputClass"
Identifier "system-keyboard"
MatchIsKeyboard "on"
Option "XkbLayout" "de"
Option "XkbVariant" "neo"
Option "AutoRepeat" "250 30"
EndSection
Signed-off-by: Michael Stapelberg <stapelberg@google.com>
xf86platformProbeDev() didn't check the device path; fix it.
This is a problem when trying to set up a non-PCI device via
explicit xorg.conf.d configuration.
A USB DisplayLink device, being non-PCI, was always set up
as a GPU device assigned to screen 0 instead of as a regular
framebuffer potentially having its own dedicated screen,
despite a configuration such as the one below. Only the relevant parts
of the configuration are quoted; it's part of a larger context
with an Intel chip that has 3 outputs:
* DP1 connected to an LCD panel,
* VGA1 connected to an external monitor,
* HDMI1 unconnected and having no user visible connector
Section "ServerFlags"
Option "AutoBindGPU" "false"
EndSection
...
Section "Device"
Identifier "Intel2"
Driver "intel"
BusID "PCI:0:2:0"
Screen 2
Option "Monitor-HDMI1" "HDMI1"
Option "ZaphodHeads" "HDMI1"
EndSection
Section "Device"
Identifier "UDL"
Driver "modesetting"
Option "kmsdev" "/dev/dri/card0"
#BusID "usb:0:1.2:1.0"
Option "Monitor-DVI-I-1" "DVI-I-1"
Option "ShadowFB" "on"
Option "DoubleShadow" "on"
EndSection
...
Section "Screen"
Identifier "SCREEN2"
Option "AutoServerLayout" "on"
Device "UDL"
GPUDevice "Intel2"
Monitor "Monitor-DVI-I-1"
SubSection "Display"
Modes "1024x768"
Depth 24
EndSubSection
EndSection
Section "ServerLayout"
Identifier "LAYOUT"
Option "AutoServerLayout" "on"
Screen 0 "SCREEN"
Screen 1 "SCREEN1" RightOf "SCREEN"
Screen 2 "SCREEN2" RightOf "SCREEN1"
EndSection
On the particular machine where I was trying to set up a UDL device,
I found the following structure was being used to match
the device to a platform device while I was debugging the issue:
xf86_platform_devices[0] == Intel, /dev/dri/card1, primary platform device
xf86_platform_devices[1] == UDL, /dev/dri/card0
devList[0] == "Intel0", ZaphodHeads: DP1
devList[1] == "Intel1", ZaphodHeads: VGA1
devList[2] == "UDL"
devList[3] == "Intel2", ZaphodHeads: HDMI1 (intended GPU device to UDL)
When xf86platformProbeDev() matched the UDL device, the BusID
check failed in both cases:
* BusID "usb:0:1.2:1.0" was specified
* Option "kmsdev" "/dev/dri/card0" was specified
As a result, xf86platformProbeDev() went on to call probeSingleDevice()
with xf86_platform_devices[0] and devList[2], resulting in the
UDL device being set up as a GPU device assigned to the first screen
instead of as a framebuffer on the third screen as the configuration
specified.
Checking Option "kmsdev" in this code may be a layering violation.
But the modesetting driver is actually part of the Xorg sources
instead of being an external driver, so the "kmsdev" path knowledge
may be used here.
Signed-off-by: Böszörményi Zoltán <zboszor@pr.hu>
Since commit d8ec33fe05, an include on
glxvndabi.h has been added to hw/xfree86/common/xf86Init.c
However, if glx is disabled through --disable-glx and GLX headers are
not installed in the build's environment, build fails on:
In file included from xf86Init.c:81:
../../../include/glxvndabi.h:64:10: fatal error: GL/glxproto.h: No such file or directory
64 | #include <GL/glxproto.h>
| ^~~~~~~~~~~~~~~
Fix this failure by removing this include, which does not seem to be
needed (another option would have been to keep it under an #ifdef GLXEXT
block).
Fixes:
- http://autobuild.buildroot.org/results/de838a843f97673d1381a55fd4e9b07164693913
Signed-off-by: Fabrice Fontaine <fontaine.fabrice@gmail.com>
We now ask Glamor to use EGL_MESA_query_driver to obtain the DRI driver
name; if successful, we use that as the DRI driver name. Following the
existing dri2.c logic, we also use the same name for the VDPAU driver,
except for i965 (and now iris), where we switch to the "va_gl" fallback.
This allows us to bypass the PCI ID lists in xserver and centralize the
driver selection mechanism inside Mesa. The hope is that we no longer
have to update these lists for any future hardware.
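The query itself is small (a hedged sketch of the extension usage; error
handling omitted):

    /* EGL_MESA_query_driver: ask EGL which DRI driver it loaded */
    PFNEGLGETDISPLAYDRIVERNAMEPROC get_name =
        (PFNEGLGETDISPLAYDRIVERNAMEPROC)
            eglGetProcAddress("eglGetDisplayDriverName");
    const char *driver = get_name ? get_name(egl_display) : NULL;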
During startup, the xfree86 DDX's InitOutput() calls PreInit for
protocol screens first, and then GPU screens. On teardown, dix_main()
calls CloseScreen in the reverse order: GPU screens first starting with
the last one and then working backwards, and then protocol screens also
in reverse order.
InitOutput() calls ScreenInit in the wrong order: for GPU screens first and then
for protocol screens. This causes a problem for drivers that have global state
that is tied to the first screen that calls ScreenInit.
Fix this by simply re-ordering the for loops to call ScreenInit for
protocol screens first and then for GPU screens second.
Slightly simplifies the callers since they don't need to check for
non-NULL anymore.
I do extremely hate the workarounds here to suppress misprite taking the
cursor down though. Surely there's a better way.
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
This is a modified version of a patch we've been carrying in Fedora and
RHEL for years now. This patch automatically adds secondary GPUs to the
master as output sink / offload source, making e.g. the use of
slave-outputs just work without requiring the user to manually run
"xrandr --setprovideroutputsource" before hooking up an external
monitor to a hybrid graphics laptop.
There is one problem with this patch, which is why it was not upstreamed
before. What to do when a secondary GPU gets detected really is a policy
decision (e.g. one may want to autobind PCI GPUs but not USB ones) and
as such should be under control of the Desktop Environment.
Unconditionally adding autobinding support to the xserver will result
in races between the DE dealing with the hotplug of a secondary GPU
and the server itself dealing with it.
However we've waited for years for any Desktop Environments to actually
start doing some sort of autoconfiguration of secondary GPUs and there
is still not a single DE dealing with this, so I believe that it is
time to upstream this now.
To avoid potential future problems if any DEs get support for doing
secondary GPU configuration themselves, the new autobind functionality
is made optional. Since no DEs currently support doing this themselves it
is enabled by default. When DEs grow support for doing this themselves
they can disable the server's autobinding through the server's cmdline or
an xorg.conf snippet.
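Such a snippet would look like this (matching the AutoBindGPU option
shown elsewhere in this log):

Section "ServerFlags"
Option "AutoBindGPU" "false"
EndSection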
Signed-off-by: Dave Airlie <airlied@gmail.com>
[hdegoede@redhat.com: Make configurable, fix with nvidia, submit upstream]
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
---
Changes in v2:
-Make the default enabled instead of installing a xorg.conf
snippet which enables it unconditionally
Changes in v3:
-Handle GPUScreen autoconfig in randr/rrprovider.c, looking at
rrScrPriv->provider, rather than in hw/xfree86/modes/xf86Crtc.c
looking at xf86CrtcConfig->provider. This fixes the autoconfig not
working with the nvidia binary driver
"bool" conflicts with C++ (meh) and stdbool.h (ngh alright fine). This
is a driver-visible change and will likely break the build for mach64,
but it can be fixed by simply using xf86ReturnOptValBool like every
other driver.
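The suggested replacement is a one-liner (sketch; the option table and
OPTION_HW_CURSOR token are illustrative):

    /* read a boolean option with a default, as other drivers do */
    Bool enable = xf86ReturnOptValBool(options, OPTION_HW_CURSOR, FALSE);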
Signed-off-by: Adam Jackson <ajax@redhat.com>
<sys/io.h> on ARM hasn't worked for a long, long time, so it was removed
from glibc upstream.
Remove the include to avoid a compilation failure on ARM with glibc.
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Closes: https://gitlab.freedesktop.org/xorg/xserver/issues/840