Public module API headers shouldn't contain anything that
isn't part of the API (non-exported functions, etc.).
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1287>
This really looks like a leftover from b861aad8e2.
In any case, if that function is to become part of the extension/driver API,
it should be declared with _X_EXPORT in a suitable header file - locally
declaring it extern isn't a good idea and is just an invitation for subtle bugs.
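For illustration, the difference is roughly the following (the function name
here is made up, not the one from b861aad8e2):

    /* in a public SDK header, prototype checked against the definition */
    extern _X_EXPORT void SomeDriverVisibleFunc(ScreenPtr pScreen);

    /* vs. a local declaration at the call site, which the compiler can
       never check against the actual definition */
    extern void SomeDriverVisibleFunc(ScreenPtr pScreen);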
Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1265>
Use glColorMask to prevent non-1.0 alpha channel values in a depth 32
pixmap backing an effective depth 24 window; such values caused an
incorrect result of the blend operation. For blending operations,
the expectation is that the destination drawable contains valid pixel
values, so the alpha channel should already be 1.0.
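A minimal sketch of the idea (not the literal glamor code):

    /* while compositing into the depth 32 pixmap backing an effective
       depth 24 window, don't let the alpha channel be written */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE);
    glDrawArrays(GL_TRIANGLES, 0, nverts);
    /* restore the full mask afterwards */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);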
Fixes: d1f142891e ("glamor: Ignore destination alpha as necessary for composite operation")
Issue: https://gitlab.gnome.org/GNOME/mutter/-/issues/3104
With the potential modeset vs. modifiers issue covered by
commit 899c87af1f ("modesetting: unflip before any setcrtc() calls")
we can safely enable modifiers by default, at least on Intel
hardware where we know that things work properly.
I suppose the one open question is whether everything will work
correctly with wonky multi-GPU setups; I don't have one to test
myself.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
On certain system deployments, /dev/dri/card* nodes aren't directly
accessible to the currently logged in user; the display server can only
access them by asking systemd-logind to open the device on its behalf.
This causes the X server to fail when trying to re-open the card* device
directly, which breaks all use of DRI3.
Fix this by using the render device path instead where possible.
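A minimal sketch of the approach, assuming libdrm is available (the helper
below is illustrative, not the exact xserver code):

    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <xf86drm.h>

    /* derive the render node path from the (logind-provided) card fd
       instead of re-opening /dev/dri/card* directly */
    static int open_render_node(int card_fd)
    {
        char *render_path = drmGetRenderDeviceNameFromFd(card_fd);
        int fd;

        if (!render_path)
            return -1; /* no render node, fall back to the card path */

        fd = open(render_path, O_RDWR | O_CLOEXEC);
        free(render_path);
        return fd;
    }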
This commit adds the RGB565 format to XVideo, reusing the RGBA32 shader.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Yuriy Vasilev <uuvasiliev@yandex.ru>
This commit adds the RGBA32 format to XVideo, along with a shader for handling it.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Yuriy Vasilev <uuvasiliev@yandex.ru>
This commit adds the UYVY format to XVideo for Glamor,
along with shader support.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Konstantin <ria.freelander@gmail.com>
In preparation for single-plane formats (for example, UYVY), the second
texture definition is moved inside a format switch, and all allocations
are now also done inside a texture switch.
No functional change.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Konstantin <ria.freelander@gmail.com>
Xv currently calls glamor_xv_free_port_data at the end of every putImage.
This leads to shader recompilation for every frame, which is a huge performance loss.
This commit changes the behaviour so that glamor_xv_free_port_data is only
called when the width, height or format of the Xv port changes.
Shader management is also done per port now, because if shaders were stored
in core glamor and reused, playing two videos with different formats
simultaneously could trigger a bug.
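Roughly, the putImage path now looks like this (field names are illustrative):

    /* only throw away textures and the compiled shader when the stream
       geometry or format actually changed */
    if (port_priv->width != width ||
        port_priv->height != height ||
        port_priv->format_id != format_id) {
        glamor_xv_free_port_data(port_priv);
        port_priv->width = width;
        port_priv->height = height;
        port_priv->format_id = format_id;
    }
    /* otherwise the existing per-port textures and program are reused */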
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Konstantin <ria.freelander@gmail.com>
There is no need to force a low GLSL version for the Xv shaders; they
work on higher versions too.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Konstantin <ria.freelander@gmail.com>
The recent commit a563f530 - "glamor/glxprov: Stop exposing non-db
(-capable) configs" - aimed at reducing the number of advertised
visuals to optimize GLX initialization.
Unfortunately, GL applications which rely exclusively on single-buffered
visuals fail to find a suitable visual with this change.
Revert the commit to expose the single-buffered visuals again and restore
compatibility with applications which rely on single-buffered configs.
This reverts commit a563f530f6
Signed-off-by: Konstantin <ria.freelander@gmail.com>
Xephyr has now gained the ability to use the glamor GLX provider.
Unfortunately, without DRI3, we end up with the same llvmpipe as before.
Signed-off-by: Konstantin Pugin <ria.freelander@gmail.com>
This allows Xorg to use Glamor GLX when Glamor is requested,
and eliminates the use of DRI2 in the Glamor case.
Signed-off-by: Konstantin Pugin <ria.freelander@gmail.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Acked-by: Emma Anholt <emma@anholt.net>
This code is almost entirely ddx-agnostic already, and I'd like to use
it from the other EGL glamor consumers. Right now that's just Xorg,
but soon it'll be Xephyr too.
This commit adds the ability to store a GLVND vendor in the Glamor
structures, which can be used to initialize some vendor-based values
without hooking into DDX internals. It also sets this value in Xorg
and Xwayland.
Signed-off-by: Konstantin Pugin <ria.freelander@gmail.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Acked-by: Emma Anholt <emma@anholt.net>
It is useful to know which GL context we are running on, so
show it in the Xorg log.
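Something along these lines (the exact message differs in the patch):

    LogMessage(X_INFO, "glamor: Using GL renderer: %s, version: %s\n",
               (const char *) glGetString(GL_RENDERER),
               (const char *) glGetString(GL_VERSION));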
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Konstantin <ria.freelander@gmail.com>
This allows choosing between Glamor on OpenGL and Glamor on OpenGL ES
via an option.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Konstantin <ria.freelander@gmail.com>
If a texture can be uploaded to GL normally using glTexImage2D, but
cannot be read back using glReadPixels, we can still accelerate it,
but we cannot create a pixmap with an FBO using this texture type. So,
add a flag to avoid such creations.
This allows us to accelerate 8-bit glyph masks on GL ES 2.0, because those
masks are only used as textures and in later stages are rendered onto RGBA
surfaces normally, so we never need to call glReadPixels on them.
This is needed for correctly working fonts on GL ES 2.0, where GL_RED and
texture swizzling are unavailable. We have to use GL_ALPHA there, and
with that format we cannot have a complete framebuffer. But a complete
framebuffer, according to testing, is not required for fonts anyway.
This also fixes all 8-bit formats for GLES2.
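For reference, this is the GLES2 limitation being worked around (sketch, not
patch code): a GL_ALPHA texture uploads and samples fine, but it is not
color-renderable, so an FBO backed by it stays incomplete and can never be
read back with glReadPixels:

    GLuint tex, fbo;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0,
                 GL_ALPHA, GL_UNSIGNED_BYTE, NULL);      /* upload works */

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        /* expected on GLES2: keep the pixmap texture-only, no FBO */
    }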
Fixes #1362
Fixes #1411
Signed-off-by: Konstantin Pugin <ria.freelander@gmail.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Acked-by: Michel Dänzer <mdaenzer@redhat.com>
Acked-by: Martin Roukala <martin.roukala@mupuf.org>
Some hardware (mostly mobile) runs much faster on GLES3 than on
desktop GL and supports more features. This commit allows using
GLES3 if glamor is running over GL ES and version 3 is supported.
The changes are the following:
1. Add a compatibility layer for 1.20/GLES2 shaders with defines for in and out
2. Switch attribute and varying to in and out in almost all shaders
(aside from gradient)
3. Add a new-GL-only frag_color variable, which is defined as gl_FragColor on
old pipelines (see the sketch after this list)
4. Switch all shaders to use frag_color.
5. Revert the previous commit: now that we have more than one GL ES
version, forcing shader version 100 for all ES shaders is no longer
correct for ES 3.
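The frag_color part of the compatibility layer boils down to a preamble like
the following (illustrative; the actual macro wiring in the patch may differ),
embedded as a C string the way glamor builds its shader sources:

    static const char frag_color_compat[] =
        "#if __VERSION__ >= 130\n"
        "out vec4 frag_color;\n"
        "#else\n"
        "#define frag_color gl_FragColor\n"
        "#endif\n";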
Signed-off-by: Konstantin Pugin <ria.freelander@gmail.com>
GLES3.2 spec, page 126:
> The variable gl_PointSize is intended for a shader to write
> the size of the point to be rasterized. It is measured in pixels.
> If gl_PointSize is not written to, its value
> is undefined in subsequent pipe stages.
If a glamor shader uses points, we should define gl_PointSize for GLES.
On desktop GL it "just works", because gl_PointSize defaults to 1.
As @anholt requested, define this only for the minimal set of shaders
(the point and glyphblt ones), to make sure performance is not
affected.
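Sketch of the change to the affected vertex shaders (illustrative, as a
glamor-style C string; the real shaders compute gl_Position differently):

    static const char vs_point[] =
        "void main() {\n"
        "    gl_Position = vec4(0.0, 0.0, 0.0, 1.0);\n"
        "    gl_PointSize = 1.0; /* undefined on GLES unless written */\n"
        "}\n";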
Reviewed-by: Emma Anholt <emma@anholt.net>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Konstantin <ria.freelander@gmail.com>
If there are no quads to draw, we may end up calling
glDrawElements with a type of zero, which generates a
GL_INVALID_ENUM error. While this error is harmless, it is annoying.
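The guard is roughly (variable names illustrative):

    /* when nbox is 0 the computed index type would be 0 as well, so
       skip the draw call entirely */
    if (nbox > 0)
        glDrawElements(GL_TRIANGLES, nbox * 6, index_type, NULL);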
Signed-off-by: Konstantin <ria.freelander@gmail.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Consider the following window hierarchy, from ancestors to descendants:
A
|
B
|
C
If both A & C have depth 32, but B has depth 24, C must effectively
behave as if it had depth 24, even if its backing pixmap has depth 32
as well.
Fixes the xmag issue described in the GitLab issue below.
Issue: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1564
It's needed for a depth 24 window backed by a depth 32 pixmap, to make
sure the window's pixels sample alpha as 1.0.
v2:
* Make sure glamor_finish_access doesn't pass in a NULL pointer.
Pass the DrawablePtr directly from glamor_finish_access to
glamor_upload_boxes. This will allow for better results if the window
depth doesn't match the backing pixmap depth.
The functions glamor_egl_fd_from_pixmap()/glamor_egl_fds_from_pixmap()
are not available without GBM support.
So if GBM is not available or too old, the code fails to link
when resolving the references to those functions.
Make sure we skip that code when glamor is built without GBM.
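Conceptually (the actual code layout differs):

    #ifdef GLAMOR_HAS_GBM
        fd = glamor_egl_fd_from_pixmap(screen, pixmap, &stride, &size);
    #else
        fd = -1; /* DRI3 buffer export is unavailable without GBM */
    #endif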
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
`glamor_make_current` is always called before any calls to GL.
Apply some dirty-tracking whenever we call `glamor_make_current` so
that we can avoid a decent amount of redundant GL work on each
Dispatch cycle.
Gamescope previously woke up an otherwise idle Xwayland server with an
XQueryPointer, and I noticed a significant amount of churn from
redundant GL work.
This has been addressed on the Gamescope side as well, but avoiding any
useless GL context switches and flushes when glamor is doing nothing
is still beneficial for CPU and power usage on portable devices.
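A hypothetical sketch of the idea (the dirty flag name is invented):

    void
    glamor_make_current(glamor_screen_private *glamor_priv)
    {
        if (lastGLContext != glamor_priv->ctx.ctx) {
            lastGLContext = glamor_priv->ctx.ctx;
            glamor_priv->ctx.make_current(&glamor_priv->ctx);
        }
        glamor_priv->dirty = TRUE;   /* hypothetical flag */
    }

    /* ...and in the per-dispatch block handler: */
    if (glamor_priv->dirty) {
        glamor_make_current(glamor_priv);
        glFlush();
        glamor_priv->dirty = FALSE;
    }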
Signed-off-by: Joshua Ashton <joshua@froggi.es>
Reviewed-by: Emma Anholt <emma@anholt.net>
Acked-by: Olivier Fourdan <ofourdan@redhat.com>
This updates rootless to treat pixmaps consistently with COMPOSITE,
using the screen_x and screen_y values rather than doing hacky math.
This will allow for proper bounds checking on a given PixmapRec.
Signed-off-by: Jeremy Huddleston Sequoia <jeremyhu@apple.com>
There are systems where softpipe is the default renderer,
e.g. when llvmpipe is not available. Using glamor
on such systems is never a good idea.
This mirrors what commit 0a9415cf79
did for llvmpipe.
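Sketch of the check (mirroring the existing llvmpipe test):

    const char *renderer = (const char *) glGetString(GL_RENDERER);

    if (strstr(renderer, "softpipe")) {
        ErrorF("Refusing to try glamor on softpipe\n");
        goto fail;
    }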
Closes: #1417
Signed-off-by: Ivan A. Melnikov <iv@altlinux.org>
For now, it sets .version=120, which prevents the shaders from compiling on ES.
Just force the shader version to always be 100 on ES, because we only use
1.20-level shaders on ES anyway, and they all work there.
Signed-off-by: Konstantin Pugin <ria.freelander@gmail.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Emma Anholt <emma@anholt.net>
ARB_blend_func_extended may be exposed even without GLSL 1.30.
In order to use it in that case, we need GLES2 shaders, which are
available if ARB_ES2_compatibility is exposed.
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Emma Anholt <emma@anholt.net>