Commit cb1b1e184 modified msSharePixmapBacking() to derive modesettingPtr from
the 'screen' argument. Unfortunately, the name of the argument is misleading --
the screen is the slave screen. If the master is modesetting,
and the slave is not modesetting, it will segfault.
To fix the problem, this change derives modesettingPtr from
ppix->drawable.pScreen. This method is already used when calling
ms->glamor.shareable_fd_from_pixmap() later in the function.
To avoid future issues, this change also renames the 'screen' argument to
'slave'.
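For illustration, the essence of the fix is the following (a sketch against the
modesetting driver's driver.h helpers, not the literal hunk):

    /* Before: 'screen' is the slave's ScreenPtr and may belong to a
     * different driver, so this cast is invalid there. */
    modesettingPtr ms = modesettingPTR(xf86ScreenToScrn(screen));

    /* After: the pixmap's own screen is always the modesetting screen. */
    modesettingPtr ms =
        modesettingPTR(xf86ScreenToScrn(ppix->drawable.pScreen));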
Signed-off-by: Alex Goins <agoins@nvidia.com>
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
We now ask Glamor to use EGL_MESA_query_driver to obtain the DRI driver
name; if successful, we use that as the DRI driver name. Following the
existing dri2.c logic, we also use the same name for the VDPAU driver,
except for i965 (and now iris), where we switch to the "va_gl" fallback.
This allows us to bypass the PCI ID lists in xserver and centralize the
driver selection mechanism inside Mesa. The hope is that we no longer
have to update these lists for any future hardware.
If a new rotate buffer was allocated, this makes sure the new buffer
has valid transformed contents when it starts being displayed.
Reviewed-by: Adam Jackson <ajax@redhat.com>
This makes sure any pending drawing to a new scanout buffer will be
visible from the start.
This makes the finish call in drmmode_copy_fb superfluous, so remove it.
Reviewed-by: Adam Jackson <ajax@redhat.com>
isastream() was never more than a stub in glibc, and was removed in
glibc-2.30 by commit a0a0dc83173c ("Remove obsolete, never-implemented
XSI STREAMS declarations").
Bug: https://bugs.gentoo.org/700838
Reviewed-by: Julien Cristau <jcristau@debian.org>
Signed-off-by: Matt Turner <mattst88@gmail.com>
Now that we've fixed LoaderSymbolFromModule this should work properly.
This reverts commit 5c7c6d5cff.
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
The thing you get back from xf86LoadSubModule is a ModuleDescPtr, not a
dlsym handle. We don't expose ModuleDescPtr to the drivers, so change
LoaderSymbolFromModule to cast its void * argument to a ModuleDescPtr.
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
During startup, the xfree86 DDX's InitOutput() calls PreInit for
protocol screens first, and then GPU screens. On teardown, dix_main()
calls CloseScreen in the reverse order: GPU screens first starting with
the last one and then working backwards, and then protocol screens also
in reverse order.
InitOutput() calls ScreenInit in the wrong order: for GPU screens first and then
for protocol screens. This causes a problem for drivers that have global state
that is tied to the first screen that calls ScreenInit.
Fix this by simply re-ordering the for loops to call ScreenInit for
protocol screens first and then for GPU screens second.
ms_present_get_crtc() returns an RRCrtcPtr, but derives it from a xf86CrtcPtr
found via ms_dri2_crtc_covering_drawable()=>ms_covering_crtc(). As a result, it
depends on all associated DIX ScreenRecs having an xf86CrtcConfigPtr DDX
private.
Some DIX ScreenRecs don't have an xf86CrtcConfigPtr DDX private, but do have an
rrScrPrivPtr DDX private. Given that we can derive all of the information we
need from RandR, we can support these screens by avoiding the use of xf86Crtc.
This change implements an RandR-based path for ms_present_get_crtc(), allowing
drawables to successfully fall back to syncing to the primary output, even if
the slave doesn't have an xf86CrtcConfigPtr DDX private.
Without this change, if a slave doesn't have an xf86CrtcConfigPtr DDX private,
drawables will fall back to 1 FPS if they overlap an output on that slave.
Signed-off-by: Alex Goins <agoins@nvidia.com>
DIX ScreenRecs don't necessarily have an xf86CrtcConfigPtr DDX private.
ms_covering_crtc() assumes that they do, which can result in a segfault.
Update ms_covering_crtc() to check the XF86_CRTC_CONFIG_PTR() returned pointer
before dereferencing it. This will still mean that ms_covering_crtc() can't fall
back to the primary output when a drawable overlaps a slave output (going to the
1 FPS default instead), but it won't segfault.
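The guard is essentially the following (sketch; the real ms_covering_crtc()
wraps this in its existing control flow):

    xf86CrtcConfigPtr xf86_config = XF86_CRTC_CONFIG_PTR(scrn);

    /* Screens without an xf86Crtc DDX private (e.g. some slave screens)
     * yield NULL here; bail out instead of dereferencing it. */
    if (!xf86_config)
        return NULL;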
Signed-off-by: Alex Goins <agoins@nvidia.com>
ms_covering_crtc() uses RRFirstOutput() to determine a primary output to fall
back to if a drawable is overlapping a slave output.
If the primary output is a slave output, RRFirstOutput() will return a slave
output even if passed a master ScreenPtr. ms_covering_crtc() dereferences the
output's devPrivate, which is invalid for non-modesetting outputs, and can
crash.
Changing RRFirstOutput() could have unintended side effects for other callers,
so this change replaces the call to RRFirstOutput() with ms_first_output().
ms_first_output() ignores the primary output if it doesn't match the given
ScreenPtr, choosing the first connected output instead.
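A sketch of such a helper, using only the RandR screen private (field names as
in randrstr.h; the actual ms_first_output() may differ in detail):

    static RROutputPtr
    ms_first_output(ScreenPtr pScreen)
    {
        rrScrPrivPtr pScrPriv = rrGetScrPriv(pScreen);
        int i;

        /* Use the primary output only if it belongs to this screen. */
        if (pScrPriv->primaryOutput &&
            pScrPriv->primaryOutput->pScreen == pScreen)
            return pScrPriv->primaryOutput;

        /* Otherwise pick the first connected output of this screen. */
        for (i = 0; i < pScrPriv->numOutputs; i++) {
            RROutputPtr output = pScrPriv->outputs[i];
            if (output->connection == RR_Connected && output->crtc)
                return output;
        }
        return NULL;
    }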
Signed-off-by: Alex Goins <agoins@nvidia.com>
Slightly simplifies the callers since they don't need to check for
non-NULL anymore.
I do extremely hate the workarounds here to suppress misprite taking the
cursor down though. Surely there's a better way.
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
Populate output possible_crtcs as the union of possible_crtcs from
the encoders rather than the intersection. Otherwise we're easily left
with possible_crtcs==0 when all the possible encoders have
non-overlapping possible_crtcs.
No idea what the magic 0x7f is about, but keep it around in case
it matters.
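In libdrm terms the change boils down to something like this (sketch; 'koutput'
is the output's drmModeConnector, 'fd' the DRM fd, and the existing 0x7f mask
is still applied to the result afterwards):

    uint32_t possible_crtcs = 0;
    int i;

    /* Union, not intersection: any encoder that can reach a CRTC makes
     * that CRTC usable for this output. */
    for (i = 0; i < koutput->count_encoders; i++) {
        drmModeEncoderPtr enc = drmModeGetEncoder(fd, koutput->encoders[i]);
        if (enc) {
            possible_crtcs |= enc->possible_crtcs;
            drmModeFreeEncoder(enc);
        }
    }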
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Currently we parse xf86Info.debug to check whether modifiers should be
disabled. Handle that within the DDX and pass GLAMOR_NO_MODIFIERS
into the glamor_init() flags.
This allows individual DDX control over the setting - say when modifiers
are working OK with one implementation and not the other.
Most importantly, this removes the final xf86 piece from the codebase.
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Move from the xf86-specific ScrnInfoRec::privates to the dix private
handling. Since there's no FreeScreen function in ScreenPtr, fold the
former into the existing CloseScreen.
Users such as modesetting are updated; out-of-tree drivers will need an
equivalent, yet trivial, patch.
Note: we need to ensure that the screen private is unset and the screen
callbacks are restored in our CloseScreen function.
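For out-of-tree drivers, the equivalent change is the usual dix private key
pattern, roughly as follows (a sketch; key and variable names are placeholders):

    static DevPrivateKeyRec screen_private_key;

    /* Once, during ScreenInit: */
    if (!dixRegisterPrivateKey(&screen_private_key, PRIVATE_SCREEN, 0))
        return FALSE;
    dixSetPrivate(&pScreen->devPrivates, &screen_private_key, priv);

    /* Wherever ScrnInfoRec::privates was consulted before: */
    priv = dixLookupPrivate(&pScreen->devPrivates, &screen_private_key);

    /* In CloseScreen: clear the private and restore the wrapped hooks. */
    dixSetPrivate(&pScreen->devPrivates, &screen_private_key, NULL);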
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
The autoconf build for the modesetting driver still relied on
xorg-macros.m4 for string replacements and did not include the
top-level manpages.am. As a result, no substitutions took place after
commit 2e497bf887.
This should be a candidate for the 1.20 branch.
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
There's not really a good reason to keep these separate: the vbe code
requires int10 and is not very large. This change eliminates the
build-time options for vbe; if you build int10, you get vbe.
Gitlab: https://gitlab.freedesktop.org/xorg/xserver/issues/692
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
i965 and radeonsi, respectively, are the drivers that have been
receiving new hardware support. It's really silly to need to update the
server side to know specific new device IDs every time a new ASIC comes
out.
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
This might be an error or not; for example, refusing to work on llvmpipe
is normal and expected. glamor_egl_init() will print X_ERROR messages if
appropriate, so we don't need to here.
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
This fixes 'non-desktop' displays staying powered on after their lease
has been revoked.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111620
Cc: Keith Packard <keithp@keithp.com>
Signed-off-by: Andres Rodriguez <andresx7@gmail.com>
The atomic driver has issues with modesetting when stealing
connectors from a different crtc, a black screen when doing rotation
on a different crtc, and in general is just a mapping of the legacy
helpers to atomic. This is already done in the kernel, so just
fall back to legacy by default until this is fixed.
Please backport to 1.20, as we don't want to enable it for everyone
there. It breaks for existing users.
The fixes to make the xserver more atomic have been pending on the
mailing list for ages.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110375
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110030
References: https://gitlab.freedesktop.org/xorg/xserver/merge_requests/36/commits
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Dynamically added outputs should have their properties
properly updated as well. Otherwise we're left with an output
with many of its properties not exposed.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Michel Dänzer <michel@daenzer.net>
Only log 1 error for consecutive flip failures, instead of filling the
log and the disk with errors for each attempted flip.
Despite our best efforts we may end up with a BO which gets refused
when we try to import it as a framebuffer, see e.g.:
https://bugs.freedesktop.org/show_bug.cgi?id=111306
This should not happen, but as the above bug shows sometimes it does,
and chances are it will happen again.
Note ideally we should check if the import is possible at
ms_present_check_flip time, like the amdgpu code is doing since:
https://gitlab.freedesktop.org/xorg/driver/xf86-video-amdgpu/merge_requests/35
but that requires a chunk of refactoring work on the modesetting driver,
so for now this will have to do.
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Before this commit ms_do_pageflip logged a single error for both the
drmmode_bo_import failure path as well as for the queue_flip_on_crtc
path. This commit splits this into 2 separate error logs so that it is
clear what the cause of the flip-failure is.
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Currently on present-flip failures we log 2 messages for each failure,
1 from ms_do_pageflip and then another one from ms_present_flip which
is the caller of ms_do_pageflip. This commit adds a log_prefix argument
to ms_do_pageflip so that its log messages can show if it is a DRI2 or
a Present flip which fails and removes the redundant error message from
ms_present_flip.
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
This is a modified version of a patch we've been carrying in Fedora and
RHEL for years now. This patch automatically adds secondary GPUs to the
master as output sink / offload source, making e.g. the use of
slave outputs just work, without requiring the user to manually run
"xrandr --setprovideroutputsource" before hooking up an external
monitor to a hybrid graphics laptop.
There is one problem with this patch, which is why it was not upstreamed
before. What to do when a secondary GPU gets detected really is a policy
decision (e.g. one may want to autobind PCI GPUs but not USB ones) and
as such should be under control of the Desktop Environment.
Unconditionally adding autobinding support to the xserver will result
in races between the DE dealing with the hotplug of a secondary GPU
and the server itself dealing with it.
However we've waited for years for any Desktop Environments to actually
start doing some sort of autoconfiguration of secondary GPUs and there
is still not a single DE dealing with this, so I believe that it is
time to upstream this now.
To avoid potential future problems if any DEs get support for doing
secondary GPU configuration themselves, the new autobind functionality
is made optional. Since no DEs currently support doing this themselves,
it is enabled by default. When DEs grow support for doing this themselves,
they can disable the server's autobinding through the server's cmdline or
an xorg.conf snippet.
Signed-off-by: Dave Airlie <airlied@gmail.com>
[hdegoede@redhat.com: Make configurable, fix with nvidia, submit upstream]
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
---
Changes in v2:
-Make the default enabled instead of installing an xorg.conf
snippet which enables it unconditionally
Changes in v3:
-Handle GPUScreen autoconfig in randr/rrprovider.c, looking at
rrScrPriv->provider, rather than in hw/xfree86/modes/xf86Crtc.c
looking at xf86CrtcConfig->provider. This fixes the autoconfig not
working with the nvidia binary driver
The miPointerSpriteFunc swcursor code expects there to only be a single
framebuffer; when the cursor moves it will undo the damage of the
previous draw, potentially overwriting whatever is there in a new
framebuffer installed after a flip.
This leads to all kinds of artifacts, so we need to disable pageflipping
when a swcursor is used.
The code for this has shamelessly been copied from the xf86-video-amdgpu
code.
Fixes: https://gitlab.freedesktop.org/xorg/xserver/issues/828
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Fix the following compiler warning:
drmmode_display.c: In function ‘drmmode_create_bo’:
drmmode_display.c:1019:9: warning: ISO C90 forbids mixed declarations and code [
1019 | uint32_t num_modifiers;
| ^~~~~~~~
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
When the pixmapPrivateKeyRec was moved from a global to being embedded
inside the drmmode_rec, these two were missed; clean them up.
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
The modesetting driver (which now often is used with Intel GPUs),
relies on DRI2ScreenInit() to setup the DRI and VDPAU driver names.
Before this commit it would always assign the same name to both, but
the VDPAU driver for i965 GPUs should be va_gl.
This commit adds a special case for the i965 case, replacing the
VDPAU driver name with "va_gl" if the GPU is using the i965 driver
for DRI.
Note this commit adds a FIXME comment for a related memory leak, that leak
was already present and fixing it falls outside of the scope of this commit.
BugLink: https://bugzilla.redhat.com/show_bug.cgi?id=1413733
Cc: kwizart@gmail.com
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
This script generates a header that has a comment containing the build path for
no real reason. As this source can end up deployed on targets in debug packages,
this means there is both potential leakage of sensitive information about the
build environment, and a source of variation that hurts reproducible builds.
Stop trying to link to a shared library we no longer build
Fixes: commit c1703cdf3b - "xfree86: Link fb statically"
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
"bool" conflicts with C++ (meh) and stdbool.h (ngh alright fine). This
is a driver-visible change and will likely break the build for mach64,
but it can be fixed by simply using xf86ReturnOptValBool like every
other driver.
Signed-off-by: Adam Jackson <ajax@redhat.com>
<sys/io.h> on ARM hasn't worked for a long, long time, so it was removed
from glibc upstream.
Remove the include to avoid a compilation failure on ARM with glibc.
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Closes: https://gitlab.freedesktop.org/xorg/xserver/issues/840
Don't link against fb, it's the driver's responsibility to load that
first. Underlinking like this is unpleasant but this matches what
autotools does.
Fixes: xorg/xserver#540
Copied from Mesa with no modifications.
This update brings in a significant number of new platform ID's.
Syncs with mesa up to commit e334a595e ("intel/icl: Add new ICL
PCI-IDs").
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Promote the generated file containing the date & time the build was
configured to top-level.
Rename it from xf86Build.h to buildDateTime.h.
Use it as well in XQuartz, stringize BUILD_DATE when needed.
If SYSTEMD_LOGIND is not defined, systemd_logind_take_fd is defined as a
macro evaluating to -1 by systemd-logind.h, leaving paused
uninitialized.
../hw/xfree86/common/xf86Xinput.c: In function ‘xf86NewInputDevice’:
../hw/xfree86/common/xf86Xinput.c:919:16: warning: ‘paused’ may be used uninitialized in this function [-Wmaybe-uninitialized]
../hw/xfree86/common/xf86Xinput.c:877:10: note: ‘paused’ was declared here
../hw/xfree86/os-support/stub/stub_init.c: In function ‘xf86OSInputThreadInit’:
../hw/xfree86/os-support/stub/stub_init.c:29:1: warning: old-style function definition [-Wold-style-definition]
Drivers may need to loop over the allocated screens during PreInit, for example
to consolidate xorg.conf options that apply to a GPU device as a whole.
Currently, this works for protocol screens because xf86Screens is exported, but
does not work for GPU screens.
Export xf86GPUScreens and xf86NumGPUScreens for consistency with xf86Screens and
xf86NumScreens.
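With this, a driver's PreInit can walk GPU screens the same way it already
walks xf86Screens, e.g. (hypothetical usage; consolidate_gpu_options() is a
made-up helper):

    int i;

    for (i = 0; i < xf86NumGPUScreens; i++) {
        ScrnInfoPtr gpu_scrn = xf86GPUScreens[i];
        /* e.g. merge xorg.conf options that apply to the whole device */
        consolidate_gpu_options(gpu_scrn);
    }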
Signed-off-by: Aaron Plattner <aplattner@nvidia.com>
If the user sets Option "Enable" "TRUE" for a monitor, the X
server will connect the connector to a crtc but tell the user it
is disconnected.
However, the user in this case is mutter; when it gets its view
of the output configuration it sees the output is disconnected
and never sets it up again, which seems like the right thing to do.
If we let the user enable a monitor, let's just set it as always
connected.
Reviewed-by: Olivier Fourdan <ofourdan@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
systemd-logind since version 234 (released 2017-07-12) supports being
restarted without losing state [1]. From the systemd NEWS file [2]:
* systemd-logind may now be restarted without losing state. It stores
the file descriptors for devices it manages in the system manager
using the FDSTORE= mechanism. Please note that further changes in
other components may be required to make use of this (for example
Xorg has code to listen for stops of systemd-logind and terminate
itself when logind is stopped or restarted, in order to avoid using
stale file descriptors for graphical devices, which is now
counterproductive and must be reverted in order for restarts of
systemd-logind to be safe. See
https://cgit.freedesktop.org/xorg/xserver/commit/?id=dc48bd653c7e101.)
This reverts commit dc48bd653c.
Closes: #531
[1] https://github.com/systemd/systemd/pull/5600
[2] 9f09a95a7e
Normally, the X server infers the initial screen size based on any
connected outputs. However, if no outputs are connected, the X server
picks a default screen size of 1024 x 768. This option overrides the
default screen size to use when no outputs are connected. In contrast
to the "Virtual" Display SubSection entry, which applies unconditionally,
"NoOutputInitialSize" is only used if no outputs are detected when the
X server starts.
Parse this option in the new exported helper function
xf86AssignNoOutputInitialSize(), so that other XFree86 loadable drivers
can use it, even if they don't use xf86InitialConfiguration().
Signed-off-by: Andy Ritger <aritger@nvidia.com>
Reviewed-by: Keith Packard <keithp@keithp.com>
Since the Solaris kernel tracks IOPL per thread, and doesn't inherit
raised IOPL levels when creating a new thread, we need to turn it on
in the input thread for input drivers like vmmouse that need register
access to work correctly.
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Keeping track of kernel state in user space doesn't buy us anything,
and introduces bugs, as we were keeping global state but the Solaris
kernel tracks IOPL per thread.
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
The VGA arbiter controls the PCI bus' routing of legacy VGA resources,
specifically the video memory aperture at 0xa0000-0xb0000 (640k should
be etc.) and a handful of I/O ports. Since 128k is far too small for a
real framebuffer these days, every driver instead maps a linear version
of VRAM through the PCI BAR. And no DRI2 drivers ever need I/O port
access, because all operations they might be used for (legacy VGA CRTC
setup, mostly) happen on the kernel side.
In other words, this just works, and we can stop breaking it.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
A user of Adélie Linux reported that modesetting wasn't working properly on
their Intel i7-9700K-integrated UHD 630 GPU. Xorg.0.log showed:
[ 131.902] (EE) modeset(0): [DRI2] No driver mapping found for PCI device 0x8086 / 0x3e98
[ 131.902] (EE) modeset(0): Failed to initialize the DRI2 extension.
Indeed, that PCI ID is missing from i965_pci_ids. Adding it fixed the issue
and allowed the system to work with i965_dri under modesetting.
All of the null checks here are redundant, you can't get to those paths
unless RANDR's already been initialized. Delete them, and remove the
pointer too.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
If the driver calls xf86HandleColormaps, CMapChangeGamma updates the HW
gamma LUT of all CRTCs via xf86RandR12LoadPalette. However,
xf86RandR12ChangeGamma was then clobbering the gamma LUT of the RandR
1.2 compatibility output's CRTC with the gamma curves computed from the
screen's global gamma values.
Fix this by bailing if xf86RandR12LoadPalette is installed.
Fixes: 02ff0a5d7e "xf86RandR12: Fix XF86VidModeSetGamma triggering a
BadImplementation error"
Noticed when porting this logic to xf86-video-nouveau, where valgrind
complained about a conditional jump based on uninitialized data.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Gitlab very kindly exposes the details of the git commit message (among
much else) in the environment. Unfortunately, piglit tries to handle the
environment in non-UTF8-safe ways, which means if the top-of-tree commit
mentions non-ASCII characters (say, in the author's name) then all the
tests fail and so does the pipeline.
Fortunately none of those variables are things our piglit invocation
needs. Since I've failed to rebuild the docker image as yet, just clear
the likely variables from the environment before running piglit.
This-makes-me: ☹
Believe it or not, somehow we've never done this in legacy mode! We
currently simply change the DPMS property on the CRTC's output's
respective DRM connector, but this means that we're just setting the
CRTC as inactive, not disabled. From the perspective of the kernel, this
means that any shared resources used by the CRTC are still in use.
This can cause problems for drivers that are not yet fully atomic,
despite using the atomic helpers internally. For instance: if CRTC-1 and
CRTC-2 are still enabled and use shared resources within the kernel (an
MST topology, for example), and then userspace tries to go enable CRTC-3
on the same topology this might suddenly fail if CRTC-3 needs the shared
resources CRTC-1 and CRTC-2 are using. While I don't know of any
situations in the mainline kernel that actually trigger this, future
plans for reworking the atomic check of MST drivers are absolutely
going to make this into a real issue (they already are in my WIP
branches for the kernel).
So: actually do the right thing here and disable CRTCs when they're not
going to be used anymore, even in legacy mode.
Signed-off-by: Lyude Paul <lyude@redhat.com>
Some Broadcom set-top-box boards have PCI busses, but the GPU is still
probed through DT. We would dereference a null busid here in that
case.
Signed-off-by: Eric Anholt <eric@anholt.net>
Lifted from vfb. xfree86 had almost the same thing but unparameterized,
port it to the vfb style.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Could cause privilege elevation and/or arbitrary files overwrite, when
the X server is running with elevated privileges (ie when Xorg is
installed with the setuid bit set and started by a non-root user).
CVE-2018-14665
Issue reported by Narendra Shinde and Red Hat.
Signed-off-by: Matthieu Herrb <matthieu@herrb.eu>
Reviewed-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Adam Jackson <ajax@redhat.com>
At the point where xf86BusProbe runs we haven't yet taken our own VT,
which means we can't perform drm "master" operations on the device. This
is tragic, because we need master to fish the bus id string out of the
kernel, which we can only do after drmSetInterfaceVersion, which for
some reason stores that string on the device not the file handle and
thus needs master access.
Fortunately we know the format of the busid string, and it happens to
almost be the same as the ID_PATH variable from udev. Use that instead
and stop calling drmSetInterfaceVersion.
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Signed-off-by: Adam Jackson <ajax@redhat.com>
If the X server is terminated while its VT is not active, it should
not change the current VT.
v2: Query current state in xf86CloseConsole using VT_GETSTATE instead of
keeping track in xf86VTEnter/xf86VTLeave/etc.
This hasn't done anything besides return TRUE in a long long time.
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Adam Jackson <ajax@redhat.com>
These are so close to identical that most DDXes implement one in terms
of the other. All the relevant cases can be distinguished by the error
code, so merge the functions together to make things simpler.
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Adam Jackson <ajax@redhat.com>
No supported driver supports 1bpp anymore, nor has in a very long time.
This option only worked with vgahw anyway.
Signed-off-by: Adam Jackson <ajax@redhat.com>
I don't think this is useful information to have in the log, and it's
a bunch of autotools and meson logic to produce it.
Signed-off-by: Eric Anholt <eric@anholt.net>
60ec8ead broke the autotools build:
sdksyms.o:(.data+0x58): undefined reference to `InitConnectionLimits'
sdksyms.o:(.data+0x2ec8): undefined reference to `xf86ServerName'
collect2: error: ld returned 1 exit status
Makefile:811: recipe for target 'Xorg' failed
Likewise 3a4d7c79 for InitConnectionLimits.
Signed-off-by: Adam Jackson <ajax@redhat.com>
If it's really this important we should just do it and not complain. We
never do it so it must not matter.
Signed-off-by: Adam Jackson <ajax@redhat.com>
I'm sure printing the address of function pointers in modules you'd
loaded might have made sense back when we rolled our own dlopen, but we
got better.
Signed-off-by: Adam Jackson <ajax@redhat.com>
The old code would not in fact validate the option value, though it
might complain about it in the log. It also didn't let you set some
legal values that the -maxclients command line option would.
Signed-off-by: Adam Jackson <ajax@redhat.com>
DGAShutdown() walks every screen and attempts to reset the mode. That's
maybe a reasonable thing to do, although the explicit loop is certainly
a bad smell.
In ddxGiveUp it's called after we've torn down the vga arbiter - and in
fact most of the rest of screen state - which is... very very bad. The
other place it's called is from the Control-Alt-BackSpace handler, where
we don't even attempt to do vga arb setup, and where in any case we're
going to escape the main loop eventually anyway.
Move all that cleanup work inside DGACloseScreen. This means it happens
earlier in server teardown than previously, but not in a way you're ever
going to be upset about.
Signed-off-by: Adam Jackson <ajax@redhat.com>
The X server will crash on a system with another DDX driver,
such as amdgpu, with a log like:
randr: falling back to unsynchronized pixmap sharing
(EE)
(EE) Backtrace:
(EE) 0: /usr/lib/xorg/Xorg (xorg_backtrace+0x4e)
(EE) 1: /usr/lib/xorg/Xorg (0x55cb0151a000+0x1b5ce9)
(EE) 2: /lib/x86_64-linux-gnu/libpthread.so.0 (0x7f1587a1d000+0x11390)
(EE)
(EE) Segmentation fault at address 0x0
(EE)
The issue is that modesetting is the master and amdgpu is the slave.
When the master attempts to access pSlavePixPriv in ms_dirty_update(),
problems result because it is accessing AMD's 'ppriv' using the
modesetting structure definition.
Apart from fixing the crash, the patch fixes another issue in the master
interface, where the driver should refer to the master pixmap.
Signed-off-by: Jim Qu <Jim.Qu@amd.com>
Reviewed-by: Alex Goins <agoins@nvidia.com>
This makes us match the featureset of autotools, and also fixes the
non-Linux default value to match.
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
I don't have a BSD to test on, but this should do the same as what
autotools did.
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
We already have pm_noop.c being built most of the time for the
no-OS-PM case, so just switch to always using it.
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Instead of having every video driver loop over any pending leases to
free them during CloseScreen, do this up in the DIX layer by
terminating leases when a leased CRTC or Output is destroyed and
(just to make sure), also terminating leases in RRCloseScreen. The
latter should "never" get invoked as any lease should be associated
with a resource which was destroyed.
This is required as by the time the driver's CloseScreen function is
invoked, we've already freed all of the DIX randr structures and no
longer have any way to reference the leases.
Signed-off-by: Keith Packard <keithp@keithp.com>
Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=106960
Cc: Thomas Hellstrom <thellstrom@vmware.com>
The recent rewrite of the modesetting driver broke the 24bpp support.
As typically found on cirrus KMS, it leads to a blank screen, spewing
the error like:
failed to add fb -22
(EE) modeset(0): failed to set mode: Invalid argument
The culprit is that the wrong bpp value of the front buffer is passed
to drmModeAddFB(). Fix it by using the back buffer bpp,
drmmode->kbpp, instead.
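The call in question then looks roughly like this (sketch; width/height/pitch,
the BO handle and fb_id come from the existing front-buffer code):

    /* depth stays at the screen depth, but the bpp argument must describe
     * the BO's storage layout: drmmode->kbpp (32), not 24. */
    ret = drmModeAddFB(drmmode->fd, width, height,
                       scrn->depth,    /* 24 */
                       drmmode->kbpp,  /* 32 */
                       pitch, bo_handle, &fb_id);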
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Tested-by: Stefan Dirsch <sndirsch@suse.de>
Reviewed-by: Adam Jackson <ajax@redhat.com>
When setting DefaultDepth to 16 in the Screen section, the current
code requests a 32 bpp framebuffer, however the X server seems to
assume 16 bpp.
Fixes commit 21217d0216 ("modesetting: Implement 32->24 bpp
conversion in shadow update")
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Stefan Agner <stefan@agner.ch>
If we're using atomic modesetting, then we're also using universal
planes, and so the lease we create needs to include the plane.
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
We don't want universal_planes unless we're using atomic APIs for
modesetting, and the kernel already enables universal_planes
automatically when atomic is enabled.
If we enable universal_planes when we're not using atomic, then we
won't have selected a plane for each crtc, and this will break lease
creation which requires planes for each output when universal_planes
is enabled.
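Concretely, the client caps end up being requested along these lines (sketch;
ms->atomic_modeset stands in for the driver's atomic flag):

    /* Only ask for atomic; the kernel enables universal_planes with it.
     * Never request DRM_CLIENT_CAP_UNIVERSAL_PLANES on its own, since
     * without atomic we would not have picked a plane per CRTC. */
    if (ms->atomic_modeset &&
        drmSetClientCap(ms->fd, DRM_CLIENT_CAP_ATOMIC, 1) != 0)
        ms->atomic_modeset = FALSE;  /* fall back to legacy modesetting */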
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
The DIX crtc and output structures are freed when their resources are
destroyed, which happens before CloseScreen is called. As a result, we
know these pointers are invalid and referencing them during any of the
remaining CloseScreen sequence will be bad.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
Cc: thellstrom@vmware.com
Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=106960
This lets an application open a suitable DRM device and pass the file
descriptor to the mode setting driver through an X server command line
option, '-masterfd'.
There's a companion application, xlease, which creates a DRM master by
leasing an output from another X server. That is available at
git clone git://people.freedesktop.org/~keithp/xlease
v2:
Always print usage, but note that it can't be used if
setuid/gid
Suggested-by: Lyude Paul <lyude@redhat.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Lyude Paul <lyude@redhat.com>
So, this did actually work on older kernels at one point in time,
however it seems that this working was a result of some of the Linux
kernel's atomic modesetting helpers not preserving the CRTC's enabled
state in the right spots. This was fixed in:
846c7dfc1193 ("drm/atomic: Try to preserve the crtc enabled state in drm_atomic_remove_fb, v2")
As a result, atomic commits which simply disassociate a DRM connector
from its CRTC while leaving the CRTC in an enabled state aren't enough
to disable the CRTC, and result in the atomic commit failing. This
currently can cause issues with MST hotplugging where X will end up
failing to disable the MST outputs after they've left the system. A
simple reproducer:
- Start up Xorg
- Connect an MST hub with displays connected to it
- Remove the hub
- Now there should be CRTCs stuck on the orphaned MST connectors, and X
won't be able to reclaim them.
Signed-off-by: Lyude Paul <lyude@redhat.com>
Cc: Louis-Francis Ratté-Boulianne <lfrb@collabora.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
drmmode_shadow_allocate() still uses drmModeAddFB() which may fail if
the format is not as expected, preventing the use of a rotated output.
Change it to use the new function drmmode_bo_import() which takes care
of calling the drmModeAddFB2() API.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=106715
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Tested-by: Tomas Pelka <tpelka@redhat.com>
Reviewed-by: Lyude Paul <lyude@redhat.com>
Change the 'chown' statement in Makefile.am to use the numeric UID
of superuser instead of relying on the name 'root'.
Bugzilla: https://bugs.freedesktop.org/27726
Signed-off-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Michał Górny <gentoo@mgorny.alt.pl>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
In commit 9db2af6f75 (xfree86: Remove xf86{Map,Unmap}VidMem) we
somehow stopped exporting xf86{Read,Write}Mmio{8,16,32}. Since the
function pointer indirection was intended to support dense vs sparse and
sparse support is now gone, we can just make the functions static inline
in compiler.h and avoid all of this.
Bugzilla: https://bugs.gentoo.org/548906
Tested-by: Christopher May-Townsend <chris@maytownsend.co.uk>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Matt Turner <mattst88@gmail.com>
We don't want DRM file descriptors to leak to child processes.
Reviewed-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
It was passing O_CLOEXEC as permission bits instead of as a flag.
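The difference is easy to miss because open(2) silently accepts the extra
argument (a minimal illustration, not the driver's actual call site):

    #include <fcntl.h>

    /* Broken: O_CLOEXEC ends up in the mode argument and is ignored. */
    int bad_fd  = open("/dev/dri/card0", O_RDWR, O_CLOEXEC);

    /* Correct: O_CLOEXEC must be OR'ed into the flags. */
    int good_fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);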
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Keith Packard <keithp@keithp.com>
Fixes DRI2 client driver name mapping for newer AMD GPUs with the
modesetting driver, allowing the DRI2 extension to initialize.
Fixes using GL with the modesetting driver for me.
Seems we were way behind on this one, time to look into something
more scalable?
Signed-off-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Modifiers support needs gbm as a dependency. Without setting the dependency,
included headers are not found reliably and the build might fail if the
headers are not placed in the default system include paths.
Signed-off-by: Roman Gilg <subdiff@gmail.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
The old 32-Bit wraparound handling didn't actually work, due to some
integer casting bug, and the mapping was ill equipped to deal with input
from the new true 64-bit GetCrtcSequence/QueueCrtcSequence api's
introduced in Linux 4.15.
For 32-Bit truncated input from pageflip events and old vblank events
and old drmWaitVblank ioctl, implement new wraparound handling, which
also allows dealing with wraparound in the other direction, e.g., if a
32-Bit truncated sequence value is passed in, whose true 64-Bit
in-kernel hw value is within 2^30 counts of the previous processed
value, but whose 32-bit truncated sequence value happens to lie just
above or below a 2^32 boundary, iow. one of the two values 'sequence'
vs. 'msc_prev' lies above a 2^32 border, the other one below it.
The method is directly translated from Mesa's proven implementation of
the INTEL_swap_events extension, where a true underlying 64-Bit wide
swapbuffers count (SBC) needs to get reconstructed from a 32-Bit LSB
truncated SBC transported over the X11 protocol wire. Same conditions
apply, ie. successive true 64-Bit SBC values are close to each other,
but don't always get received in strictly monotonically increasing
order. See Mesa commit cc5ddd584d17abd422ae4d8e83805969485740d9 ("glx:
Handle out-of-sequence swap completion events correctly. (v2)") for
explanation.
Additionally add a separate path for true 64-bit msc input originating
from Linux 4.15+ drmCrtcGetSequence/QueueSequence ioctl's and
corresponding 64-bit vblank events. True 64-bit msc's don't need
remapping and must be passed through.
As a reliability bonus, they are also used here to update the tracking
values msc_prev and ms_high with perfect 64-Bit ground truth as baseline
for mapping msc from pageflip completion events, because pageflip events
are always 32-bit wide, even when the new kernel api's are used. Because
each pageflip(-event) is always preceded close in time (and vblank
count) by a drmCrtcQueueSequence queued event or drmCrtcGetSequence
query as part of DRI2 or DRI3+Present swap scheduling, we can be certain
that each pageflip event will get its truncated 32-bit msc remapped
reliably to the true 64-bit msc of flip completion whenever the sequence
api is available, ie. on Linux 4.15 or later.
Note: In principle at least the 32-bit mapping path could also be
backported to earlier server branches, as this seems to be broken for at
least server 1.16 to 1.19.
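The 32-bit remapping reduces to a helper of roughly this shape (a simplified
standalone sketch of the technique, not the driver's exact code; true 64-bit
msc values from the new ioctls bypass it entirely):

    #include <stdint.h>

    /* Map a 32-bit truncated sequence onto the 64-bit msc timeline,
     * assuming the true value lies within +/- 2^31 of the last known
     * 64-bit msc.  Handles wraps in both directions across a 2^32
     * boundary. */
    static uint64_t
    remap_msc32(uint64_t msc_prev, uint32_t sequence)
    {
        uint64_t msc = (msc_prev & ~(uint64_t) 0xffffffff) | sequence;

        if (msc + 0x80000000ull < msc_prev)
            msc += 0x100000000ull;   /* counter wrapped forward */
        else if (msc > msc_prev + 0x80000000ull && msc >= 0x100000000ull)
            msc -= 0x100000000ull;   /* event from just before a wrap */

        return msc;
    }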
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Cc: Keith Packard <keithp@keithp.com>
Cc: Michel Dänzer <michel.daenzer@amd.com>
The function is ported from intel-ddx uxa backend around 2013, where its
stated purpose was to apply a vblank_offset to msc values to correct for
problems with those kernel provided msc values. Some (somewhat magic and
puzzling to myself) heuristic tried to guess if provided values were
unreasonable and tried to adapt the corrective vblank_offset to account
for that.
Except: It wasn't applied to kernel provided msc values, but the values
delivered by clients via DRI2 or Present, so valid client targetmsc
values, e.g., requesting a vblank event > 1000 vblanks in the future,
triggered the offset correction in arbitrarily wrong ways, leading to
wrong msc values being returned and thereby vblank events queued to the
kernel for the wrong time. This causes glXSwapBuffersMscOML and
glXWaitForMscOML to swap / return immediately whenever a swap/wait in >
1000 vblanks is requested.
The original code was also written to only deal with 32 bit mscs, but
server 1.20 modesetting ddx can now use new Linux 4.15+ kernel vblank
api to process true 64 bit msc's, which may confuse the heuristic even
more due to 32 bit integer truncation/wrapping.
This code caused various problems in the intel-ddx in the past since
year 2013, and was removed there in 2015 by Chris Wilson in commit
42ebe2ef9646be5c4586868cf332b4cd79bb4618:
" uxa: Remove the filtering of bogus Present MSC values
If the intention was to filter the return values from the kernel, the
filtering would have been applied to the kernel values and not to the
incoming values from Present. This filtering introduces crazy integer
promotion and truncation bugs all because Present feeds garbage into its
vblank requests.
"
Indeed, I found a Mesa bug yesterday which can cause Mesa's
PresentPixmap request to spuriously feed garbage targetMSC's into the
driver under some conditions. However, while other video drivers seem to
cope relatively well with that, modesetting ddx causes KDE-5's
plasmashell to lock up badly quite frequently, and my suspicion is that
the code removed in this commit is one major source of the extra
fragility.
Also my own tests fail for any swap scheduled more than 1000 vblanks
into the future, which is not uncommon for some scientific applications.
Iow. modesetting's swap scheduling seems to be more robust without this
function afaics.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Keith Packard <keithp@keithp.com>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
Copied from Mesa with no modifications.
Gives us Coffee Lake platform name updates and a sync on Kaby Lake and
Ice Lake PCI IDs.
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
GBM objects were never destroyed after looking for format and
modifier compatibility when deciding whether to flip or copy
a presented pixmap.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=106106
Signed-off-by: Louis-Francis Ratté-Boulianne <lfrb@collabora.com>
Kludge sdksyms.c generator to not fail on GetClientPid.
It returns pid_t, which on NetBSD is "#define pid_t __pid_t".
This slightly alters the GCC preprocessor output, which this fragile
code could not deal with when using GCC 5+.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Use the DRM_CAP_ADDFB2_MODIFIERS query to make sure the kms
driver supports modifiers in the addfb2 ioctl, and fall back
to addfb ioctl without modifiers if modifiers are unsupported.
E.g., as of Linux 4.17, nouveau-kms so far does not support
modifiers and gets angry if drmModeAddFB2WithModifiers() is
called (-> failure to set a video mode -> blank screen), but
Mesa's nvc0+ gallium driver causes gbm_bo_get_modifier() to
return a valid modifier by translating the default tiling of
bo's created via gbm_bo_create() into a modifier other than
DRM_FORMAT_MOD_INVALID (see Mesa's nvc0_miptree_get_modifier()).
Testing for != DRM_FORMAT_MOD_INVALID is apparently not
sufficient for safe use of drmModeAddFB2WithModifiers.
Bonus: Handle potential failure of populate_format_modifiers().
The required DRM_CAP is defined since libdrm v2.4.65, and we
require v2.4.89+ for the server, so we can use it unconditionally.
Tested on intel-kms, radeon-kms, nouveau-kms. Fixes failure on
NVidia Pascal.
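The shape of the fallback (sketch; fd, width/height/format and the
handles/pitches/offsets/modifiers arrays are the ones the driver already
fills in per plane):

    uint64_t cap = 0;
    Bool have_modifiers = (drmGetCap(fd, DRM_CAP_ADDFB2_MODIFIERS, &cap) == 0 &&
                           cap != 0);

    if (have_modifiers && modifiers[0] != DRM_FORMAT_MOD_INVALID)
        ret = drmModeAddFB2WithModifiers(fd, width, height, format,
                                         handles, pitches, offsets,
                                         modifiers, &fb_id,
                                         DRM_MODE_FB_MODIFIERS);
    else
        ret = drmModeAddFB2(fd, width, height, format,
                            handles, pitches, offsets, &fb_id, 0);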
Fixes: 2f807c2324 ("modesetting: Add support for multi-plane pixmaps when page-flipping")
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Cc: Daniel Stone <daniels@collabora.com>
Cc: Louis-Francis Ratté-Boulianne <lfrb@collabora.com>
Reviewed-by: Louis-Francis Ratté-Boulianne <lfrb@collabora.com>
ms_queue_vblank() returns false on failure.
Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Frank Binns <frank.binns@imgtec.com>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
Apparently on NetBSD we can hit failures like this:
sdksyms.c:1773:15: error: expected expression before ',' token
(void *) &, /* ../../dri3/dri3.h:110 */
I've been unable to reproduce that locally (even in a NetBSD vm), but
an obvious workaround might be to just notice empty symbol names and
ignore them rather than emit invalid C code.
Tested-by: Thomas Klausner <wiz@netbsd.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
This was never merged upstream. It was a Fedora kernel patch but dropped from
Fedora in 2013 with kernel 3.12.
The reason for the KDSKBMUTE proposal has been fixed in systemd in Feb 2013,
systemd 198.
https://lists.freedesktop.org/archives/systemd-devel/2013-February/008795.html
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
../hw/xfree86/utils/gtf/gtf.c: In function ‘print_fb_mode’:
../hw/xfree86/utils/gtf/gtf.c:241:50: warning: cast from function call of type ‘double’ to non-matching type ‘int’ [-Wbad-function-cast]
printf(" timings %d %d %d %d %d %d %d\n", (int) rint(1000000.0 / m->pclk), /* pixclock in picoseconds */
That's pretty nitpicky of you, gcc, but at least it's easy to fix.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Acked-by: Keith Packard <keithp@keithp.com>
We would fail to get the FB ID if it wasn't already imported, since we
were checking to see if the pointer was NULL (it never was) rather than
if the content of the pointer was 0.
Signed-off-by: Daniel Stone <daniels@collabora.com>
Reported-by: Olivier Fourdan <ofourdan@redhat.com>
Tested-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Olivier Fourdan <ofourdan@redhat.com>
drmmode_crtc_set_mode has a loop nested inside another loop, where both
of them were using 'i' as the loop iterator. Rename it to avoid an
infinite loop.
Signed-off-by: Daniel Stone <daniels@collabora.com>
Reported-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-and-Tested-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Non-atomic kms drivers like radeon-kms (or nouveau-kms with
default setting of "atomic ioctl disabled") don't export
any formats, so num_formats == 0.
Some atomic drivers (nouveau-kms with boot param nouveau.atomic=1,
or intel-kms on, e.g., Linux 4.13) expose num_formats == 0, or
don't expose any modifiers, so num_modifiers == 0.
Let the drmmode_is_format_supported() check pass in these cases
to allow page flipping, as it works just fine.
Tested on NV-96 for nouveau, HD-5770 for radeon, Intel Ivybridge
with Linux 4.13 and drm-next to fix page flipping.
Fixes: 9d147305b4 ("modesetting: Check if buffer format is supported when flipping")
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
We need to make sure that atomic commits are consistent
or else the kernel will reject them. For example, when moving
a CRTC from one output to another, the first output's CRTC_ID
property needs to be reset. Also, if the second output was using
another CRTC beforehand, that CRTC needs to be disabled to avoid an
inconsistent state.
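With the atomic API that means all affected objects go into a single request,
along these lines (sketch using plain libdrm; the *_prop variables are the
property IDs for CRTC_ID and ACTIVE that the driver looks up beforehand):

    drmModeAtomicReqPtr req = drmModeAtomicAlloc();

    /* Detach the CRTC from the output that used to own it ... */
    drmModeAtomicAddProperty(req, old_connector_id, old_conn_crtc_id_prop, 0);
    /* ... disable the CRTC the new output was previously driven by ... */
    drmModeAtomicAddProperty(req, stale_crtc_id, stale_crtc_active_prop, 0);
    /* ... and attach the CRTC to its new output, all in one commit. */
    drmModeAtomicAddProperty(req, new_connector_id, new_conn_crtc_id_prop,
                             crtc_id);

    ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
    drmModeAtomicFree(req);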
Signed-off-by: Louis-Francis Ratté-Boulianne <lfrb@collabora.com>
Tested-by: Daniel Stone <daniels@collabora.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
CRTCs and outputs need to be enabled/disabled when the current
DPMS mode is changed. We also try to do it in an atomic commit
when possible.
Signed-off-by: Louis-Francis Ratté-Boulianne <lfrb@collabora.com>
Tested-by: Daniel Stone <daniels@collabora.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Keep track of whether or not we fed modifiers into GBM when we allocated
a BO. We'll use this later inside Glamor, to reallocate buffer storage
if we allocate buffer storage using modifiers, and a non-modifier-aware
client requests an export of that pixmap.
This makes it possible to run a compositing manager on an old GLX/EGL
stack on top of an X server which allocates internal buffer storage
using exotic modifiers from modifier-aware GBM/EGL/KMS.
Signed-off-by: Daniel Stone <daniels@collabora.com>
Reported-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
The newline before the protocol version got lost in commit
6cbefc3e0a. Prior to that commit, the
release date printed a newline at the end:
X.Org X Server 1.19.6
Release Date: 2017-12-20
X Protocol Version 11, Revision 0
Build Operating System: Linux 4.14.12-1-ARCH x86_64
Now, that string gets run together with the version:
X.Org X Server 1.19.99.903 (1.20.0 RC 3)X Protocol Version 11, Revision 0
Build Operating System: Linux
Since the version string printing has a variety of #ifdefs in it, just
add the newline to the beginning of the protocol version string.
Signed-off-by: Aaron Plattner <aplattner@nvidia.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
The framebuffer can include multiple CRTCs in a multi-monitor
setup. So we shouldn't use the buffer size but the CRTC size
instead. Rotated displays are shadowed, so we don't need to
worry about it there.
Signed-off-by: Louis-Francis Ratté-Boulianne <lfrb@collabora.com>
Tested-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Fixes a regression caused by modifiers support. For some hw to
continue working even if not supporting ARGB8888 and ARGB2101010
formats, we assume that all imported BOs are opaque.
Signed-off-by: Louis-Francis Ratté-Boulianne <lfrb@collabora.com>
Tested-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Otherwise the same content is shown on all outputs.
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
The respective ISA functions were dropped back in 2008
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Install missing headers to the SDK directory to allow external modules
to properly build against the SDK. After this commit, the list of files
installed in the SDK include directory is the same as the list of files
installed by the autotools-based build.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
drmmode_output_dpms is called especially with !output->crtc found in
xf86DisableUnusedFunctions so we have to guard for it, else the server
segfaults:
0 0x00007fdc1706054b in drmmode_output_dpms (output=0x55e15243c210, mode=3) at
drmmode_display.c:2243
1 0x000055e1500b6873 in xf86DisableUnusedFunctions (pScrn=0x55e152133f00) at
xf86Crtc.c:3021
2 0x000055e1500be940 in xf86RandR12CrtcSet (pScreen=<optimized out>,
randr_crtc=0x55e1524b2b90, randr_mode=0x0, x=0, y=0, rotation=<optimized out>,
num_randr_outputs=0, randr_outputs=0x0) at xf86RandR12.c:1244
3 0x000055e1500fa1c2 in RRCrtcSet (crtc=<optimized out>, mode=0x0, x=0, y=0,
rotation=rotation@entry=1, numOutputs=numOutputs@entry=0, outputs=0x0) at
rrcrtc.c:763
4 0x000055e1500fba9e in ProcRRSetCrtcConfig (client=0x55e152bfae50) at
rrcrtc.c:1390
5 0x000055e150044008 in Dispatch () at dispatch.c:478
6 0x000055e150047ff8 in dix_main (argc=13, argv=0x7ffc68561038,
envp=<optimized out>) at main.c:276
7 0x00007fdc1a0c6a87 in __libc_start_main () at /lib64/libc.so.6
8 0x000055e150031d0a in _start () at ../sysdeps/x86_64/start.S:120
Fixes: ba0c75177 ("modesetting: Fix up some XXX from removing GLAMOR_HAS_DRM_*")
Signed-off-by: Tobias Klausmann <tobias.johannes.klausmann@mni.thm.de>
Reviewed-by: Adam Jackson <ajax@redhat.com>
... for xfree86, at least for now. Things appear to work for Xwayland
but not yet for modesetting. Hopefully we can fix that before 1.20 but
in the meantime this makes testing both paths easier than a rebuild.
Signed-off-by: Adam Jackson <ajax@redhat.com>
We don't use flink in the GetFB import path anymore, as we do an
FD-based import instead.
Signed-off-by: Daniel Stone <daniels@collabora.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
A cosmetic change for automake (though we have to replicate some of
xorg-macros.m4 in manpages.am now), but meson's configure_file() wants
@-delimited strings.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Keith Packard <keithp@keithp.com>
The check for "no modifier specified" in drmmode_is_format_supported()
should check for DRM_FORMAT_MOD_INVALID, not for zero, as zero actually
means DRM_FORMAT_MOD_LINEAR.
This allows page-flipping again when appropriate, as
tested under nouveau and ati drivers.
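In other words, the "no modifier" test becomes (sketch):

    /* 0 is DRM_FORMAT_MOD_LINEAR, a perfectly valid modifier; only
     * DRM_FORMAT_MOD_INVALID means "no modifier was specified". */
    Bool no_modifier = (modifier == DRM_FORMAT_MOD_INVALID);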
Fixes: 9d147305b4 ("modesetting: Check if buffer format is supported when flipping")
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
[... but leave it defined and exported, since we're ABI-frozen - ajax]
Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Ben Crocker <bcrocker@redhat.com>
Reviewed-by: Antoine Martin <antoine@nagafix.co.uk>
Tested-by: Ben Crocker <bcrocker@redhat.com>
restore abi
Having different types of code all trying to check for elevated privileges
is a bad idea. This implementation is the most thorough one.
Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Ben Crocker <bcrocker@redhat.com>
Reviewed-by: Antoine Martin <antoine@nagafix.co.uk>
Tested-by: Ben Crocker <bcrocker@redhat.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
All the macros are available in the libdrm that we depend on.
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
We already require libdrm 2.4.89, which provides the definition. Besides,
guarding kernel UABI like that is generally a bad idea.
See the previous commit for details why :-)
Cc: Keith Packard <keithp@keithp.com>
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
The macro has been available in libdrm for ages. Furthermore, having a guard
like this is a very bad idea: building against an old version will result in
missing run-time functionality.
Since it's UABI one can use a local fallback, old kernels will return
-EINVAL and the fallback path will kick in.
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
With earlier commit the required version was bumped to 2.4.89, thus the
guards always evaluate to true.
Fixes: e4e3447603 ("Add RandR leases with modesetting driver support
[v6]")
Cc: Keith Packard <keithp@keithp.com>
Cc: Daniel Stone <daniels@collabora.com>
Cc: Louis-Francis Ratté-Boulianne <lfrb@collabora.com>
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
This reverts commit 8c455db0eb.
Since xf86platformBus.h is only included when XSERVER_PLATFORM_BUS is
defined, and configure.ac only defines that on systems with udev, this
commit breaks the build on non-udev systems like Solaris.
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Copied from Mesa with no modifications.
Gives us Geminilake and Kaby Lake platform names updates and
sync on Coffee Lake PCI IDs.
Cc: Timo Aaltonen <timo.aaltonen@canonical.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Implement function added in DRI3 v1.1.
A newest version of libepoxy (>= 1.4.4) is required as earlier
versions use a problematic version of Khronos
EXT_image_dma_buf_import_modifiers spec.
v4: Only send scanout-supported modifiers if flipping is possible
v5: Fix memory corruption in XWayland (uninitialized pointer)
Signed-off-by: Louis-Francis Ratté-Boulianne <lfrb@collabora.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Acked-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Add support for 'check_flip2' so that the present core can know
why it is impossible to flip in that scenario. The core can then
let know the client that the buffer format/modifier is suboptimal.
v2: No longer need to implement 'check_flip'
Signed-off-by: Louis-Francis Ratté-Boulianne <lfrb@collabora.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Acked-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Use the optimal buffer format (e.g. tiled/compressed) available
for scanout.
v2: Don't use multi-plane modifier to create scanout buffer
v3: Add flag to retrieve modifiers set from enabled CRTCs only
v4: Fix uses when GBM/EGL driver doesn't support modifiers
Signed-off-by: Louis-Francis Ratté-Boulianne <lfrb@collabora.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Acked-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Retrieve IN_FORMATS property from the plane. It gives the
allowed formats and modifiers for BO allocation.
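Reading the blob with plain libdrm looks roughly like this (sketch; error
handling trimmed, layout per drm_mode.h's struct drm_format_modifier_blob;
fd and plane_id are assumed to be already known):

    drmModeObjectPropertiesPtr props =
        drmModeObjectGetProperties(fd, plane_id, DRM_MODE_OBJECT_PLANE);
    uint32_t i;

    for (i = 0; i < props->count_props; i++) {
        drmModePropertyPtr prop = drmModeGetProperty(fd, props->props[i]);
        if (!prop)
            continue;
        if (!strcmp(prop->name, "IN_FORMATS")) {
            drmModePropertyBlobPtr blob =
                drmModeGetPropertyBlob(fd, props->prop_values[i]);
            struct drm_format_modifier_blob *fm = blob->data;
            uint32_t *formats =
                (uint32_t *) ((char *) fm + fm->formats_offset);
            struct drm_format_modifier *mods =
                (struct drm_format_modifier *) ((char *) fm + fm->modifiers_offset);
            /* fm->count_formats entries in formats[], fm->count_modifiers
             * entries in mods[]; each modifier's 'formats' bitmask tells
             * which of the listed formats it applies to. */
            drmModeFreePropertyBlob(blob);
        }
        drmModeFreeProperty(prop);
    }
    drmModeFreeObjectProperties(props);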
Signed-off-by: Louis-Francis Ratté-Boulianne <lfrb@collabora.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Acked-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
To make sure we also use the same primary plane, and to avoid
mixing uses of the two APIs, it is better to always use the atomic
modesetting API when possible.
v2: Don't use mode_output->connector_id
Signed-off-by: Louis-Francis Ratté-Boulianne <lfrb@collabora.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Acked-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
This allows the use of CCS-compressed or tiled pixmaps as BOs when
page-flipping.
Signed-off-by: Louis-Francis Ratté-Boulianne <lfrb@collabora.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Acked-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
In order to flip between compressed and uncompressed buffers -
something drmModePageFlip explicitly bans us from doing - we need
to use the atomic modesetting API. It's only 'fake' atomic
though, given that we still commit for each CRTC separately and that
CRTC and connector properties are not set with the atomic API.
The helper functions to retrieve DRM properties have been borrowed
from Weston.
Signed-off-by: Louis-Francis Ratté-Boulianne <lfrb@collabora.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Acked-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
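A minimal sketch of that pattern -- look up a property id by name, then
flip by swapping FB_ID in an atomic commit. Helper names are illustrative
and not the driver's actual functions.

    #include <stdint.h>
    #include <string.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    /* Find a property id by name on a DRM object (the kind of helper the
     * commit message says was borrowed from Weston). */
    static uint32_t
    prop_id_by_name(int fd, uint32_t obj_id, uint32_t obj_type,
                    const char *name)
    {
        uint32_t id = 0;
        drmModeObjectProperties *props =
            drmModeObjectGetProperties(fd, obj_id, obj_type);

        if (!props)
            return 0;

        for (uint32_t i = 0; i < props->count_props && !id; i++) {
            drmModePropertyRes *prop = drmModeGetProperty(fd, props->props[i]);

            if (prop) {
                if (strcmp(prop->name, name) == 0)
                    id = prop->prop_id;
                drmModeFreeProperty(prop);
            }
        }
        drmModeFreeObjectProperties(props);
        return id;
    }

    /* 'Fake' atomic flip: a single commit that only swaps the primary
     * plane's FB_ID, with an event so completion is signalled like a
     * classic page flip. */
    static int
    atomic_flip(int fd, uint32_t plane_id, uint32_t fb_id, void *user_data)
    {
        int ret = -1;
        uint32_t fb_prop = prop_id_by_name(fd, plane_id,
                                           DRM_MODE_OBJECT_PLANE, "FB_ID");
        drmModeAtomicReq *req = drmModeAtomicAlloc();

        if (req && fb_prop &&
            drmModeAtomicAddProperty(req, plane_id, fb_prop, fb_id) >= 0)
            ret = drmModeAtomicCommit(fd, req,
                                      DRM_MODE_ATOMIC_NONBLOCK |
                                      DRM_MODE_PAGE_FLIP_EVENT,
                                      user_data);
        drmModeAtomicFree(req);
        return ret;
    }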
This adds support for RandR CRTC/Output leases through the modesetting
driver, creating a lease using new kernel infrastructure and returning
that to a client through an fd which will have access to only those
resources.
v2: Restore CRTC mode when leases terminate
When a lease terminates for a crtc we have saved data for, go
ahead and restore the saved mode.
v3: Report RR_Rotate_0 rotations for leased crtcs.
Ignore leased CRTCs when selecting screen size.
Stop leasing encoders, the kernel doesn't do that anymore.
Turn off crtc->enabled while leased so that modesetting
ignores them.
Check lease status before calling any driver mode functions.
When starting a lease, mark leased CRTCs as disabled and hide
their cursors. Also, check to see if there are other
non-leased CRTCs which are driving leased Outputs and mark
them as disabled as well. Sometimes an application will lease
an idle crtc instead of the one already associated with the
leased output.
When terminating a lease, reset any CRTCs which are driving
outputs that are no longer leased so that they start working
again.
This required splitting the DIX level lease termination code
into two pieces, one to remove the lease from the system
(RRLeaseTerminated) and a new function that frees the lease
data structure (RRLeaseFree).
v4: Report RR_Rotate_0 rotation for leased crtcs.
v5: Terminate all leases on server reset.
Leases hang around after the associated client exits so that
the client doesn't need to occupy an X server client slot and
consume a file descriptor once it has gotten the output
resources necessary.
Any leases still hanging around when the X server resets or
shuts down need to be cleaned up by calling the kernel to
terminate the lease and freeing any DIX structures.
Note that we cannot simply use the existing
drmmode_terminate_lease function on each lease, as that also
wants to reset the video mode, which we do not want to do while
the server is shutting down.
modesetting: Validate leases on VT enter
The kernel doesn't allow any master ioctls to run when another
VT is active, including simple things like listing the active
leases. To deal with that, we check the list of leases
whenever the X server VT is activated.
xfree86: hide disabled cursors when resetting after lease termination
The lessee may well have played with cursors and left one
active on our screen. Just tell the kernel to turn it off.
v6: Add meson build infrastructure
[Also bumped libdrm requirement - ajax]
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
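At the libdrm level, creating and later revoking such a lease boils down
to roughly the following; the helper names are illustrative, not the
driver's code.

    #include <fcntl.h>
    #include <stdint.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    /* Create a lease covering one CRTC and one connector. On success the
     * returned fd is handed to the client, which can only see and use the
     * leased objects; *lessee_id identifies the lease to the lessor. */
    static int
    lease_crtc_and_connector(int master_fd, uint32_t crtc_id,
                             uint32_t connector_id, uint32_t *lessee_id)
    {
        uint32_t objects[2] = { crtc_id, connector_id };

        /* Returns the lease fd (>= 0) or a negative error code. */
        return drmModeCreateLease(master_fd, objects, 2, O_CLOEXEC,
                                  lessee_id);
    }

    /* On termination, revoke the lease so the server can re-enable any
     * CRTCs that were driving the formerly leased outputs. */
    static int
    terminate_lease(int master_fd, uint32_t lessee_id)
    {
        return drmModeRevokeLease(master_fd, lessee_id);
    }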
This lets a DRM client map between X outputs and kernel connectors.
v2:
Change CONNECTOR_ID to enum -- Adam Jackson <ajax@nwnk.net>
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Adam Jackson <ajax@nwnk.net>
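On the client side, the mapping can be read with the standard RandR
output-property API; the sketch below assumes the property is exported as
a 32-bit INTEGER value named CONNECTOR_ID, as described above.

    #include <X11/Xlib.h>
    #include <X11/Xatom.h>
    #include <X11/extensions/Xrandr.h>

    /* Return the kernel connector id for an X output, or 0 if the
     * property is absent. */
    static unsigned long
    output_connector_id(Display *dpy, RROutput output)
    {
        Atom prop = XInternAtom(dpy, "CONNECTOR_ID", True);
        Atom actual_type;
        int actual_format;
        unsigned long nitems, bytes_after, value = 0;
        unsigned char *data = NULL;

        if (prop == None)
            return 0;

        if (XRRGetOutputProperty(dpy, output, prop, 0, 1, False, False,
                                 XA_INTEGER, &actual_type, &actual_format,
                                 &nitems, &bytes_after, &data) == Success &&
            actual_type == XA_INTEGER && actual_format == 32 && nitems == 1)
            value = *(unsigned long *)data; /* Xlib stores 32-bit data as long */

        if (data)
            XFree(data);
        return value;
    }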
Save any value of the kernel non-desktop property in the xf86Output
structure to avoid non-desktop outputs in the default configuration.
[Also bump randrproto requirement to a version that defines
RR_PROPERTY_NON_DESKTOP - ajax]
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Adam Jackson <ajax@nwnk.net>
At startup, we want to ignore non-desktop monitors unless we don't
find any desktop monitors. Because there are no DIX RandR resources
allocated, let the driver store this information in a new field in the
xf86Output structure and then use that value to help decide whether to
include an output as part of the default configuration.
v2:
Suggested-by: Michel Dänzer <michel@daenzer.net>
Bump XF86_CRTC_VERSION from 7 to 8. This will let out-of-tree
drivers know whether this field is available.
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Adam Jackson <ajax@nwnk.net>
glamor now supports depth 30, so allow use of it.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Antoine Martin <antoine@nagafix.co.uk>
This retains old behavior for depths <= 24, but allows gamma
table and colormap updates to work properly at depth 30.
This needs the xf86Randr12CrtcComputeGamma() fix for depth 30
from a previous commit to work. Otherwise the server will work,
but gamma table updates will silently fail, i.e. the server
would always run with a default identity gamma LUT.
v2: Simplify as proposed by Michel.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-by: Antoine Martin <antoine@nagafix.co.uk> (v1)
At screen depths > 24 bit, the color palettes passed into
xf86Randr12CrtcComputeGamma() can have a larger number of slots
than the crtc's hardware lut. E.g., at depth 30, 1024 palette
slots vs. 256 hw lut slots. This palette size > crtc gamma size
case is not handled yet and leads to silent failure, so gamma
table updates do not happen.
Add a new subsampling path for this case.
This makes LUT updates work again, as tested with the xgamma
utility (which uses the XF86VidMode extension) and some RandR-based
gamma ramp animation.
v2: Better resampling when subsampling the palette, as
proposed by Ville. Now reaches the max index of the
palette and deals with non-power-of-two sizes. Thanks.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-by: Antoine Martin <antoine@nagafix.co.uk> (v1)
Cc: <ville.syrjala@linux.intel.com>
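The subsampling itself is just index remapping; a self-contained sketch of
the idea (not the exact xf86Randr12CrtcComputeGamma() code):

    #include <stddef.h>
    #include <stdint.h>

    /* Map a palette with palette_size entries onto a hardware LUT with
     * lut_size entries (e.g. 1024 -> 256 at depth 30). The rounding
     * reaches the last palette entry and copes with non-power-of-two
     * sizes. */
    static void
    subsample_palette(const uint16_t *palette, size_t palette_size,
                      uint16_t *lut, size_t lut_size)
    {
        for (size_t i = 0; i < lut_size; i++) {
            size_t j = (lut_size > 1)
                ? (i * (palette_size - 1) + (lut_size - 1) / 2) /
                      (lut_size - 1)
                : 0;

            lut[i] = palette[j];
        }
    }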
Support X screens of depth 30, so initialization doesn't fail.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Antoine Martin <antoine@nagafix.co.uk>
It turns out that the kernel DRM ioctl handling returns EINVAL
instead of ENOTTY if one tries to call the new drmCrtcGetSequence()
or drmCrtcQueueSequence() ioctls introduced in Linux 4.15 on an
older kernel where they are missing. This prevents the fallback code
from falling back to the old drmWaitVblank() ioctl and thereby
breaks vblank handling.
E.g., on Linux 4.13, glxgears -info runs unthrottled at 10000 fps
instead of 60 fps, and the OML_sync_control extension is broken.
Check for errno != EINVAL before setting has_queue_sequence = TRUE.
Additionally, when drmCrtcQueueSequence() is supported, set
has_queue_sequence = TRUE on success, or we might get at
least a temporary failure in ms_queue_vblank().
One slight ambiguity is that we can also get EINVAL if
drm_crtc_vblank_get() fails in the kernel, so if that
happened at the first invocation of the new API, we'd fall
back to drmWaitVblank() and then fail there, instead of
failing in the new API, but the end result would be the
same.
Fixes: 44d5f2eb8a ("xf86-video-modesetting: Support new vblank kernel API [v2]")
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Cc: Keith Packard <keithp@keithp.com>
Reviewed-by: Keith Packard <keithp@keithp.com>
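A sketch of the resulting detect-and-fall-back logic, simplified and with
illustrative names rather than the driver's actual ms_* code:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>
    #include <xf86drm.h>

    /* Try the new (Linux 4.15+) per-CRTC sequence API first. Missing
     * ioctls should yield ENOTTY, but as noted above older kernels report
     * EINVAL, so both must route us to the legacy drmWaitVBlank() path. */
    static bool
    get_vblank_count(int fd, uint32_t crtc_id, uint64_t *seq)
    {
        uint64_t ns;
        drmVBlank vbl;

        if (drmCrtcGetSequence(fd, crtc_id, seq, &ns) == 0)
            return true;

        if (errno != ENOTTY && errno != EINVAL)
            return false;

        /* Legacy fallback: query the current vblank counter. */
        memset(&vbl, 0, sizeof(vbl));
        vbl.request.type = DRM_VBLANK_RELATIVE;
        vbl.request.sequence = 0;
        if (drmWaitVBlank(fd, &vbl) != 0)
            return false;

        *seq = vbl.reply.sequence;
        return true;
    }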
Add the missing arguments to the function signature.
Fixes: e46820fb89 ("miinitext: introduce LoadExtensionList() to replace over LoadExtension()")
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Add a stub for Xnest so it continues to link, but otherwise we support
GLX on every server so there's no need to make every DDX add it.
Signed-off-by: Adam Jackson <ajax@redhat.com>
The big change here is MakeCurrent and context tag tracking. We now
delegate context tags entirely to the vnd layer, and simply store a
pointer to the context state as the tag data. If a context is deleted
while it's current, we allocate a fake ID for the context and move the
context state there, so the tag data still points to a real context. As
a result we can stop trying so hard to detach the client from contexts
at disconnect time and just let resource destruction handle it.
Since vnd handles all the MakeCurrent protocol now, our request handlers
for it can just return BadImplementation. We also remove a bunch of
LEGAL_NEW_RESOURCE calls, because by the time we're called vnd has already
allocated its tracking resource on that XID.
v2: Update to match v2 of the vnd import, and remove more redundant work
like request length checks.
v3: Add/remove the XID map from the vendor private thunk, not the
backend. (Kyle Brenneman)
v4: Fix deletion of ghost contexts (Kyle Brenneman)
Signed-off-by: Adam Jackson <ajax@redhat.com>