This code hasn't been updated with anything even resembling what anyone is
shipping in nearly thirty months. It hasn't built out of the box since
7.1. Most of its features over AIGLX are accomplished with DRI2 and
friends.
In the single-output-enabled case we never enter the loop, so 'test'
never gets set and we fail to match a good mode.
This was causing my 2560x1600 to end up at 2048x1536.
Modelled after the xfree86 code. Call miDCInitialize to init the SW rendering
engine, then take the pointers, store them in an xnest-local variable, and put
the xnest-specific sprite funcs in place. In the xnest sprite funcs, call
through to the mi sprite funcs after doing the xnest-specific stuff.
The problem happens if the Monitor/Card combo doesn't provide EDID info
and the XFree86-VidModeExtension extension is used.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
With the MD/SD device hierarchy we need control over the generation of the
motion history as well as the conversion later before posting it to the
client. So let's not let the drivers change it.
No x.org driver currently uses it anyway, and linuxwacom doesn't either, so
dumping it seems safe enough.
Recording damage from other operations (e.g. creating a client damage record)
may confuse the migration code, resulting in corruption.
Option "EXAOptimizeMigration" appears safe now, so enable it by default. Also
remove it from the manpage, as it should now only be needed on request in the
course of diagnosing bug reports.
GNU/kFreeBSD defines __FreeBSD_kernel__, but not __FreeBSD__.
Unify preprocessor conditionals between variable declaration and use.
Debian bug #482550.
During GetPointerEvents (and others), we need to access the last coordinates
posted for this device from the driver (not as posted to the client!). Lastx/y
is ok if we only have two axes, but with more complex devices we also need to
transition between all other axes.
ABI break, recompile your input drivers.
This copies over the files generated from mesa/src/mesa/glapi. There's
a corresponding mesa commit that makes it easy to generate the glapi files
straight into the xserver tree when the XML definitions change.
The only few files that are copied from mesa but aren't generated are
glapi.[ch] and glthread.[ch]. Everything in there is technically DRI
driver API and the whole setup is still a bit fragile, but it's not a new
problem.
The --with-mesa-source configure option is still around since other
parts of the server (XGL and DMX - grep for MESA_SOURCE) need it,
but for the common case of building with GLX and AIGLX support, that
option is no longer needed.
Conflicts:
Xext/xprint.c (removed in master)
config/hal.c
dix/main.c
hw/kdrive/ati/ati_cursor.c (removed in master)
hw/kdrive/i810/i810_cursor.c (removed in master)
hw/xprint/ddxInit.c (removed in master)
xkb/ddxLoad.c
If the monitor isn't reduced-blanking (either through EDID logic, or
config file setting), then remove RB modes from the default pool. Any
RB modes from the driver and config file pools will stick around though;
you asked for them, you got them.
Seeing as this code seems to be specific to OpenBSD, I don't think
__x86_64__ should have been added there at all. It appears to have
been added wherever __amd64__ existed before, which is wrong. I
think that part of the commit should be reverted, and also that all
four of the checks should be __OpenBSD__ && __amd64__ instead of two
in one direction and two flipped.
The first guess used to be "is the preferred mode for one output the
preferred mode on all outputs". Instead, do "find the largest mode that's
preferred for at least one output and available on all outputs".
Old logic was just the first one that happened to have an associated
CRTC. The new logic tries to find one that's definitely connected, has
probed modes, and has the largest candidate mode.
Most of these drivers didn't work. ati was the only one that even came
close. The igs, ipaq, itsy, pcmcia, savage, sis530, trident, trio, ts300,
and vxworks directories have never built since modularisation, so clearly
no one can miss them.
This removes and simplifies some conditionals. We don't need to test for
pDev->isMaster inside xf86CursorSetCursor() because only MDs enter there.
In the last chunk, ScreenPriv fields were being assigned without need, so
that code was wrapped inside the conditional to avoid it.
I also tried to make the indentation more sane in some parts that I touched.
Signed-off-by: Tiago Vignatti <vignatti@c3sl.ufpr.br>
Minor modification, part of the original patch led to cursors not being
updated properly when controlled through XTest.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
The only function that can set SWCursor before xf86DeviceCursorInitialize()
is xf86InitCursor(), when the VCP is created.
Signed-off-by: Tiago Vignatti <vignatti@c3sl.ufpr.br>
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
Missing parameter caused event processing to go nuts when checking valuators.
X.Org Bug 15936 <http://bugs.freedesktop.org/show_bug.cgi?id=15936>
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
KdInitOutput() used to enable Composite when it was disabled by default,
but now this hack prevents ``-extension Composite'' from working.
Remove it, as Composite is enabled by default anyway.
Use dummy config functions to replace those from config/config.c, and
therefore do not link Xprt with $CONFIG_LIB.
Works around an endlessly spinning loop in dix/dispatch.c::Dispatch()
(WaitForSomething() not waiting) when built with dbus, which was
causing Xprt to use 95% cpu.
Since glyphs are stored in pixmaps now, they can make their way into VRAM,
which invalidates a bunch of fast-path assumptions in the XAA code. Thus
you end up doing color-expands or WriteBitmap from la-la land and your
aliased glyphs go all funny.
Since XAA isn't ever growing the ability to do sane glyph accel, just force
glyph pixmaps into host memory by catching them at CreatePixmap time.
We need a manual call to SetCursor when we switch from SW to HW rendering and
the other way round. This way we display the new cursor after removing the old
one.
In addition, we only update the internal state for the VCP's sprite. This way,
when we switch back to HW rendering the state is up-to-date and wasn't
overwritten with the other sprite's state.
The second part is a hack. It would be better to keep a state for each sprite,
but then again we don't have hardware that can render multiple cursors, so we
might as well make do with the hack.
Switches back to HW cursors when sprites other than the VCP are removed.
The current state requires the cursor to change shape once before it updates
to SW / HW rendering (whatever is appropriate), e.g. by moving into a
different window. Until this is done, the cursor is invisible.
This patch only creates a Files section if required, so if no entries are
added, an empty Files section will not be created.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
more conservatively (to match Linux's Wacom driver) and
we now receive all tablet-related events.
(cherry picked from commit 588683cecca2cfc65a28de035cd6ee3d64ff59d2)
LeaveVT/EnterVT cycles will free/realloc shadow frame buffers. Because of
this, the presence/absence of that data is insufficient to know whether
the screen function wrappers are necessary. Instead, the 'transform_in_use'
flag should be used.
This patch also adds 'xf86RotateFreeShadow' for drivers to use at LeaveVT
time to free the rotation data; it will be reallocated on EnterVT.
In DeleteInputDeviceRequest, leave the conf_idev (which is shared with
xf86ConfigLayout.input) alone for devices that were specified in the
ServerLayout section of the config file. This way, in the next server
generation we are left with what was the original config and can thus re-init
the devices.
This is an addon to 6d22a9615a, an attempt to
fix Bug 14418.
X.Org Bug 15645 <https://bugs.freedesktop.org/show_bug.cgi?id=15645>
X.Org Bug 14418 <https://bugs.freedesktop.org/show_bug.cgi?id=14418>
The previous check works in the master branch, but doesn't work with MPX. We
actually copy the SD's information into the MD's public.devicePrivate, so we
need to explicitly check whether a device is an MD before freeing the module.
glcore gets linked with -ldl and -lpthread for s3tc and glapi.
The xserver needs:
  DLOPEN_LIBS - to dlopen the glcore dso
  LD_EXPORT_SYMBOLS_FLAG - to export symbols for glcore to use
The ld flag is added to kdrive only when GLX is enabled; the net overhead for
Xephyr is ~155KB, which could be reduced with --dynamic-list.
When starting up kdrive/fbdev, if the current framebuffer mode is sensible use
that unless told otherwise.
Signed-off-by: Daniel Stone <daniel@fooishbar.org>
XKB was disabled in 08928afb05, with the comment
"Disable XKB, as we can't yet use it". Seems like "yet" is over, running GNOME
and changing XKB settings seems to work in Xnest now.
X.Org Bug 10015 <https://bugs.freedesktop.org/show_bug.cgi?id=10015>
Use __libmansuffix__ instead of __oslibmansuffix__ which isn't getting
replaced, and rewrap some text to get __xservername__ replaced in the
description of Option "Accel" (cpp doesn't like the preceding quote).
This extension provided bug-compatibility with pre-X11R6, but has been
stubbed out in our server since 2006 to return BadRequest when you actually
asked for it.
(cherry picked from commit 4e2c6dbabdbbaaca213fd08edd422de15d0900cc)
required because of commit 7c0709a736,
which made requestingClient in dix specific to Xprint only.
Add to XPRINT_LIBS in hw/xprint/Makefile.am in front of
$(XSERVER_LIBS) to override definitions in libdix.la for standard xservers.
Follows 571206832d (providing -DXPRINT
to xprint subdirs).
Note it may be possible to restructure the code so that
requestingClient is stored elsewhere than in dix. See discussions
following http://lists.freedesktop.org/archives/xorg/2008-March/033844.html
If this is done it may be possible to revert this commit (if not 571206...).
-DXPRINT had only been set for Xprt in hw/xprint/Makefile.am
After commit 7c0709a736 it is also
required for ps/PsArea.c and PsFonts.c to ensure ‘requestingClient’ is
defined, so make it a global Xprint definition in configure.ac.
(cherry picked from commit 28a6719fd486d9a9cecad0b057d9ea7c59c66055)
The DDX (xfree86 anyway) maintains its own device list in addition to the one
in the DIX. CloseDevice will only remove it from the DIX, not the DDX. If the
server then restarts (last client disconnects), the DDX devices are still
there, will be re-initialised, then the hal devices come in and are added too.
This repeats until we run out of device ids.
This also requires us to strdup() the default pointer/keyboard in
checkCoreInputDevices.
X.Org Bug 14418 <http://bugs.freedesktop.org/show_bug.cgi?id=14418>
A few pieces of code were abusing this define for other purposes, which are
converted to #ifndef DEBUG instead. There should be no ABI consequences
to this change.
The rationale is that having the define in xorg-server.h also disables
assert() in drivers, which is unexpected and also difficult to avoid, since
xorg-server.h is included in their config.h, and you can't put a #undef in
config.h. As for removing it from the server instead of moving it to an
internal header, we probably shouldn't have unnecessary assert()s in
critical server paths anyway, and if we do we could #define NDEBUG in the
specific cases needed.
Some pointer devices send key events [1], blindly getting the paired device
crashes the server. So let's check if the device is a pointer before we try to
get the paired device.
[1] The MS Wireless Optical Desktop 2000's multimedia keys are sent through
the pointer device, not through the keyboard device.
The jstk code for joysticks is not used by any module, was never actually
compiled, and uses an API that is deprecated these days.
No reason to keep it.
functions to change state when the keyboard is reloaded; instead,
pass it as an event.
(cherry picked from commit 7e653f806ff5508aace059312156f319a9ed4479)
InitValuatorDeviceClass.
Add InitProximityClassDeviceStruct call to prepare for tablet support.
(cherry picked from commit 1bd980a5b114f5320360943214f8f9f23b29c1e3)
Get rid of glcontextmodes.[ch] from build, rename __GlcontextModes to
__GLXcontext. Drop all #includes of glcontextmodes.h and glcore.h.
Drop the DRI context modes extension.
Add protocol code to DRI2 module and load DRI2 extension by default.
Since there's no way to safely know how many blocks xf86DoEDID_DDC2 would
return, add a new xf86DoEEDID entrypoint to do that, and implement the
one in terms of the other.
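For illustration, the split presumably looks something like this (a sketch
only; it assumes the existing xf86DoEDID_DDC2(scrnIndex, pBus) signature, and
the "complete" flag name is an assumption):

    /* Hypothetical shape of the new entry point. */
    xf86MonPtr xf86DoEEDID(int scrnIndex, I2CBusPtr pBus, Bool complete);

    xf86MonPtr
    xf86DoEDID_DDC2(int scrnIndex, I2CBusPtr pBus)
    {
        /* The old entry point keeps its behaviour: base block only. */
        return xf86DoEEDID(scrnIndex, pBus, FALSE);
    }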
The latter doesn't give you the option's value, it just tells you if
it's present in the configuration. So using Option "EXANoComposite" "false"
disabled composite acceleration.
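The distinction, roughly (a sketch from memory; the option token and the
structure field are illustrative, and the helper names are assumed to be
the usual xf86 option API):

    /* Wrong: only asks whether the option is present, ignoring its value,
     * so Option "EXANoComposite" "false" still disables composite accel. */
    if (xf86IsOptionSet(options, OPTION_NO_COMPOSITE))
        info->CheckComposite = NULL;

    /* Right: read the boolean value, with a default for the unset case. */
    if (xf86ReturnOptValBool(options, OPTION_NO_COMPOSITE, FALSE))
        info->CheckComposite = NULL;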
This patch (and not setting HARDWARE_CURSOR_BIT_ORDER_MSBFIRST on big endian
platforms) fixes it for me with the radeon driver and doesn't break intel.
Correct patch this time :)
Should have done this in the first place. Since we're already checking for the
absence of the get_crtc callback, we'll short-circuit the later call and
disable the output, so the ugly "continue" block is unnecessary.
By adding a new output callback, ->get_crtc, xf86SetDesiredModes is able to
avoid turning off outputs & CRTCs if the current output<->CRTC mappings are the
same as the desired configuration. This helps avoid flickering displays at
startup time, which speeds things up a little and looks better.
Unless we check for vtSema before calling into the CRTC and output callbacks,
we may end up trying to access video memory that no longer exists, leading to a
crash. So if we don't have vtSema, return FALSE to the caller, indicating that
we didn't do anything.
Fixes #14444.
Actually more like in the mainline case, where the ideal mode happens to
be the very first aspect match on the first monitor. But let's not
split hairs.
The address written to 0xcf8 contains the PCI slot address to send the
config cycle to. However, we would ignore that and always send the
cycle to the device whose BIOS we were running. This breaks some
integrated graphics platforms that have explicit knowledge about the
system's host bridge, for example.
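For reference, the slot is encoded in the 0xcf8 address itself; a standalone
sketch of the standard type-1 config-address decode the emulation should
honour:

    #include <stdint.h>

    /* Type-1 PCI config address written to I/O port 0xcf8:
     *   bit 31      enable
     *   bits 23:16  bus
     *   bits 15:11  device
     *   bits 10:8   function
     *   bits 7:2    dword register offset
     */
    struct pci_addr { unsigned bus, dev, fn, reg; };

    static struct pci_addr
    decode_cf8(uint32_t cf8)
    {
        struct pci_addr a;
        a.bus = (cf8 >> 16) & 0xff;
        a.dev = (cf8 >> 11) & 0x1f;
        a.fn  = (cf8 >> 8)  & 0x07;
        a.reg = cf8 & 0xfc;
        return a;   /* route the cycle here, not to the BIOS's own device */
    }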
While the ScreenRec's notion of size in millimeters would get updates,
the RANDR 1.1 notion wouldn't, so your screen would appear to be square
and probably at some ludicrous DPI.
xserver and libpciaccess both need to open /dev/xf86, which can only
be opened once. I implemented pci_system_init_dev_mem() like Ian
suggested. This requires some minor changes to the BSD-specific
os-support code. Since pci_system_init_dev_mem() is a no-op on
FreeBSD this should be no problem.
i.e., don't check for the end of the list by ->name == NULL, since that
won't work now. Fix the consumers of xf86DefaultModes to use the new
explicit size as well.
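That is, consumers switch from a NULL-name sentinel to an explicit count; a
sketch (the exported count symbol and the per-mode callback are assumptions
for illustration):

    extern const DisplayModeRec xf86DefaultModes[];
    extern const int xf86NumDefaultModes;       /* assumed name */

    /* Before: for (m = xf86DefaultModes; m->name != NULL; m++) ... */
    for (i = 0; i < xf86NumDefaultModes; i++)
        consider_mode(&xf86DefaultModes[i]);    /* hypothetical consumer */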
In order to report accurate values to users of the RandR property interface,
it's sometimes necessary to ask the driver to update the value (for example
when backlight brightness changes without the server's knowledge, due to hotkey
events or direct sysfs banging).
This patch wires up the core server code with a new xf86CrtcFuncs callback,
get_property, to allow for this.
The new code is available under the RANDR_13_INTERFACE define, which in turn
depends on the RANDR_12_INTERFACE code.
Old heuristic was to find the first monitor that expressed a preference,
then attempt to get all other monitors to agree. This doesn't work
particularly well when the two sets of modes don't precisely intersect;
you get overlapping-but-not-identical output geometry and things go wrong.
New heuristic is:
- Exact user preference, if given
- Exact output preference, if the same for all outputs
- Best (largest) mode of modes common to all outputs:
- with the same aspect ratio as all outputs (may be NULL)
- with 4:3 aspect ratio
- Then the old heuristic to try to get something lit
Note that it is simply not doable to have a reliable initial output guess if
you insist on trying to clone all outputs together. It's far too easy to
end up with displays that simply don't have modes in common. We need to
switch to right-of placement someday, once we're not limited to CRTC size
limits and we have working multi-GPU in RANDR.
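A rough sketch of the ranking above, in pseudo-C with hypothetical helpers
and simplified types (not the actual xf86 code):

    static DisplayModePtr
    pick_initial_mode(Output *outputs, int n)
    {
        DisplayModePtr m;

        if ((m = exact_user_preference(outputs, n)))
            return m;
        if ((m = shared_output_preference(outputs, n)))  /* same preferred
                                                            mode everywhere */
            return m;
        /* Largest mode available on every output, preferring the outputs'
         * own aspect ratio, then 4:3, before giving up. */
        if ((m = largest_common_mode(outputs, n, ASPECT_OF_OUTPUTS)) ||
            (m = largest_common_mode(outputs, n, ASPECT_4_3)))
            return m;
        return old_heuristic(outputs, n);   /* just get something lit */
    }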
If you don't do this, then Modes "800x600" in the Display subsection will
be dutifully ignored and the driver will start at whatever resolution it
feels like.
CVT is enough different from GTF that it should not be used on monitors
that aren't expecting it. This brings us closer to what the spec says
the correct behaviour is.
Before this it was meaningless to try to mark DisplayModeRec tables
const, since the mode name would be emitted as a pointer to an
anonymous string constant, and therefore would have to be fixed up by
ld.so and so couldn't live in .rodata. With this change the standard
mode lists can live in .rodata, and modes duplicated from them will
have their names filled in on the fly.
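The idea, sketched (hypothetical helper; DisplayModePtr and the
HDisplay/VDisplay fields as in xf86str.h):

    /* When duplicating a const table entry whose name is NULL, synthesize
     * "WIDTHxHEIGHT" into freshly allocated storage, so the source table
     * itself can stay in .rodata. */
    static void
    fill_mode_name(DisplayModePtr mode)
    {
        char name[32];

        if (mode->name)
            return;
        snprintf(name, sizeof(name), "%dx%d", mode->HDisplay, mode->VDisplay);
        mode->name = strdup(name);
    }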
The FindPCIVideoInfo() function isn't needed anymore.
xf86scanpci() is being called only once, so we don't need permanent
(static) variables there.
restorePciState() is not used for now (until we find out why multiple
cards aren't working).
Formerly the code claimed it could only handle up to 256 visuals, which
was true. Also true, but not explicitly stated, was that it could only
handle visuals with VID < 256. If you have enough screens, and subsystems
that add lots of visuals, you can easily run off the end. (Made worse
because we allocate visual IDs from the same pool as XIDs.) If your app
then chooses a visual > 256, then the Xinerama code would throw BadMatch
on CreateColormap and your app wouldn't start.
With this change, PanoramiXVisualTable is gone. Other subsystems that
were using it as a translation table between each screen's visuals now
use a PanoramiXTranslateVisual() helper.
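A hedged sketch of what such a translation helper amounts to (simplified;
not the exact server code):

    /* Find the visual on another screen equivalent to 'v' by comparing
     * visual properties, instead of indexing a 256-entry table. */
    static VisualID
    translate_visual(VisualPtr v, ScreenPtr to)
    {
        int i;

        for (i = 0; i < to->numVisuals; i++) {
            VisualPtr c = &to->visuals[i];
            if (c->class == v->class && c->nplanes == v->nplanes &&
                c->redMask == v->redMask && c->greenMask == v->greenMask &&
                c->blueMask == v->blueMask &&
                c->ColormapEntries == v->ColormapEntries)
                return c->vid;
        }
        return 0;   /* no equivalent visual on that screen */
    }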
When an expose event happens on a host GL window paired with an
internal drawable, route that expose event to the clients listening
for expose events on the internal drawable.
This reverts commit 3abce3ea2b and
6cbaf15e61.
The memory returned to xf86LoadModule was allocated in doLoadModule, which
calls the respective module's PreInit. As it turns out, input and output
drivers store a pointer to the module elsewhere, so freeing it in
xf86LoadModule is a bad idea.
For further reference: hw/xfree86/common/xf86Helper.c
Input drivers: xf86InputDriverList[blah]->module = module;
Output drivers: xf86DriverList[blah]->module = module;
Unloading the module would not look pretty then.
Rather than letting the DDX allocate the events, allocate them once in the DIX
and just pass it around when needed.
DDX should call GetEventList() to obtain this list and then pass it into
Get{Pointer|Keyboard}Events.
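Roughly how a DDX is expected to use it (a sketch from memory; the exact
GetEventList()/GetPointerEvents()/mieqEnqueue() signatures are assumptions
here):

    /* The DIX owns the event storage now; the DDX just borrows it. */
    static void
    post_motion(DeviceIntPtr dev, int *valuators)
    {
        EventListPtr events;
        int i, nevents;

        GetEventList(&events);                      /* DIX-allocated list */
        nevents = GetPointerEvents(events, dev, MotionNotify, 0,
                                   POINTER_RELATIVE, 0, 2, valuators);
        for (i = 0; i < nevents; i++)
            mieqEnqueue(dev, (events + i)->event);
    }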
LoadModule() returns the only reference to a fresh piece of memory (a
ModuleDescPtr). Sadly, xf86LoadModules dropped the return value on the floor,
leaking memory for each module it loaded.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
All the failure paths were very diligent in freeing the "fullpath" temporary
string, but the success case was not. All the content only got strdup()d, so
it's not live memory anymore.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
xf86LogInit allocates a piece of memory and stores it in lf. LogInit() will
then effectively strdup it, but lf is never freed again.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
We need to start breaking the XKB API to enforce sanity, so drag whichever
headers we need to do so into the server tree, as the client API is set in
stone, being part of Xlib.
Now resizing it won't produce weird overlaps of the widgets. Thanks to
Pelle Johansson for his help showing me how to do this.
(cherry picked from commit ef3498e92d13c82633fdbe8120396bfbe1e7489a)
After trying to switch from X to VT (or just quit) the video-amd driver
attempts to issue INT 10/0 to go to mode 3 (VGA). The emulator, running
the BIOS code, would then spit out:
c000:0282: A2 ILLEGAL EXTENDED X86 OPCODE!
The opcode was 0F A2, or CPUID; it was not implemented in the emulator.
This simple patch, against 1.3.0.0, handles the CPUID instruction in one of
two ways:
1) if run on __i386__ or __x86_64__ then it calls the CPUID instruction
directly.
2) if run elsewhere it returns a canned 486dx4 set of values for
function 1.
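The gist of the two paths, as a standalone sketch (the canned numbers are
placeholders rather than the exact values from the patch, and the patch
itself implements this inside the x86 emulator rather than via <cpuid.h>):

    #include <stdint.h>
    #if defined(__i386__) || defined(__x86_64__)
    #include <cpuid.h>
    #endif

    static void
    emu_cpuid(uint32_t fn, uint32_t *a, uint32_t *b, uint32_t *c, uint32_t *d)
    {
    #if defined(__i386__) || defined(__x86_64__)
        unsigned int ea, eb, ec, ed;
        /* Run the real instruction on behalf of the emulated BIOS. */
        if (!__get_cpuid(fn, &ea, &eb, &ec, &ed))
            ea = eb = ec = ed = 0;
        *a = ea; *b = eb; *c = ec; *d = ed;
    #else
        /* Elsewhere: canned 486DX4-style answer for function 1. */
        if (fn == 1) { *a = 0x00000480; *b = *c = 0; *d = 0x00000001; }
        else         { *a = *b = *c = *d = 0; }
    #endif
    }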
This fix allows the video-amd driver to switch back to console mode,
with the GSW BIOS.
Thanks to Symbio Technologies for funding my work, and ThinCan for
providing hardware :)
Signed-off-by: Bart Trojanowski <bart@jukie.net>
Acked-by: Eric Anholt <eric@anholt.net>
'Loading foo' is verbosity 3, whereas 'already built-in' is verbosity 0.
This means that gdm's log would just be full of bare 'module already
built-in' messages.
Also fixed DarwinEQEnqueue to match changes to the callback
And also use dpmsstubs.c rather than copying the code into darwin.c
(cherry picked from commit 4c5c30a4beb7a427b00b18097f548876ad3c11d7)
xf86CrtcRotate() is called by randr 1.2 drivers via xf86CrtcSetMode() or
xf86SetDesiredModes() during ScreenInit(), at which point pScrn->pScreen is
not set. If a user specifies a rotation in their config file,
pScrn->pScreen is dereferenced and boom.
8 bit color still doesn't work, but the -depth command line argument now works properly.
(cherry picked from commit 6765949c27c053d22882f54337cfd09203aa5383)
RISC chips that trap on unaligned loads and stores need to
define __GLX_ALIGN64. This used to get added to the cflags
in the old *.cf files but it no longer does in the modular
X server.
Also, Alpha needs to pass -mieee to the compiler as well.
This is a simple backport of a patch that debian, and probably other
distributions, have been applying forever. To the best of my
knowledge the patch was written by Jurij Smakov. See Debian bug
number #388125.
I just checked and this has been rotting for more than a year in
freedesktop bugzilla as #8392.
Signed-off-by: David S. Miller <davem@davemloft.net>
First mode is _always_ preferred in 1.4; the bit that used to mean this
now means that the preferred mode is also the native pixel format. The
old "is GTF" bit now means "is continuous-frequency" instead.
Section 3.6.4, Table 3.14: Feature Support, Notes 4 and 5.
Nothing actually decoded yet, but at least we print what they are.
New in EDID 1.4:
- Color Management Data (0xF9), Section 3.10.3.7
- CVT 3 Byte Code Descriptor (0xF8), Section 3.10.3.8
- Established Timings III Descriptor (0xF7), section 3.10.3.9
- Manufacturer-specified data tag (0x00 - 0x0F), section 3.10.3.12
I need sleep! Why am I making these stupid mistakes... sorry for pointless commit spam. ugg.
(cherry picked from commit b16351fc6457aabead328472d16dc25789032940)
General code cleanup, whitespace, dead code removal, added missing prototypes.
Made Xquartz come to foreground later in startup, so it doesn't appear for Xquartz -version
(cherry picked from commit 36922e8ff4316c93843aa3fe959cf8df3c7d5892)
(in case we get reports about slow launch times, this will
help clarify what's happening)
(cherry picked from commit 2eea3483cf893f8f81bacd434b31408dfb38cb06)
It's out of date and not included in the build. Instead, xf86DefModeSet.c is
built from vesamodes and extramodes using modeline2c.awk and *that's* what gets
built.
Sorry for the commit spam... I'm tired and was overly quick to commit... forgot to include a necessary file.
(cherry picked from commit e564b7aeaab63e4c943445275af680b3b5898a94)
Don't hardcode X11.app's path in the launchd plist.
Only install the launchd plist if we --enable-launchd.
(cherry picked from commit 6b74c535dc331d1d621b2541492a3336f69d70a2)
Leaving xpr unflattened since we want modularity to replace that with xpc (XPluginComposite) at some point
(cherry picked from commit 48e6a75fbdd0fee86e364f02ace83f20b312a2b2)
From bugzilla bug 13467¹:
Currently the xserver fails to build without this (now deleted) file, as the
Makefile tries to distribute it. The patch simply removes the reference to
modeline2c.pl.
[1] http://bugs.freedesktop.org/show_bug.cgi?id=13467
Signed-off-by: James Cloos <cloos@jhcloos.com>
Don't run VT switches, terminations, or anything, on the core keyboard: only
run actions which affect the keyboard state. If we get an action such as VT
switch, just swallow the event.
Taking out the trash.
We don't need dumpkeymap since we'll be getting keymaps straight from the OS. .Xmodmap should be sufficient for any user-needed changes. If this is not
the case, please let us know, so we can address any problems you have.
fullscreen never worked AFAIK
cr isn't being used and xpr is much better.
(cherry picked from commit e41af2967e885466c4d194fa4c3b358e6be37c30)
This should hopefully eliminate confusion some people have over which X11.app is which.
Now BOTH are in /A/U/X11.app and we intelligently determine whether to execute our app_to_run
or launch the server. If arguments are given, we launch the server. Otherwise if we can
connect to an X DISPLAY, we execute app_to_run. Otherwise, we launch the server.
(cherry picked from commit e7026216ccaa8e4fb073800ba947c9909d4faada)
From bugzilla bug 13467¹:
The modeline2c script is the only part of the Xorg server that requires Perl.
[This] is a simpler replacement that works with any normal AWK.
[1] http://bugs.freedesktop.org/show_bug.cgi?id=13467
Bug was posted by Joerg Sonnenberger <joerg@NetBSD.org>.
if X is not the active application.
fixes <rdar://problem/5167664> xeyes dead until window activation
(cherry picked from commit c7573379a85a1480cc51650075078e41dafe56af)
window to another Space, it will work correctly (as opposed
to just leaving a ghost window). We accomplish this by listening
for the notification from Xplugin that our window has been moved,
and then we ask X11 to move the window to the new location.
(cherry picked from commit 2d50ea8013e7c1639d570e227b53b037fb567565)
button 2 click would actually result in a Command-2 chord.
(I.e. it wasn't releasing Command before clicking the fake button.)
(cherry picked from commit 0d5dd5dffa4c5ce3f54dfe53720a39d524dc8e37)
We free the ValuatorClassRec quite regularly. If a SIGIO is handled while
we're swapping device classes, we can bring the server down when we try to
access lastx/lasty of the master device.
This fixes an undefined symbol error happening when compiling
the server with the --disable-xv configure switch.
Basically, xnest was linking against
@XSERVER_LIBS@ and @XNEST_LIBS@ and the order of the libraries
given to the linker at the end of the process was bogus.
* configure.ac: make XNEST_LIBS contain the $XSERVER_LIBS re-ordered
in such a way that the linker finds the symbols of all the libs contained
in $XNEST_LIBS.
* hw/xnest/Makefile.am: don't link against @XSERVER_LIBS@ anymore because
XNEST_LIBS contains the right thing.
These hints allow an acceleration architecture to optimize allocation of certain
types of pixmaps, such as pixmaps that will serve as backing pixmaps for
redirected windows.
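A hedged sketch of how an acceleration architecture might act on such a hint
(the hint constant and the helper shown are illustrative):

    static PixmapPtr
    drv_create_pixmap(ScreenPtr pScreen, int w, int h, int depth,
                      unsigned usage_hint)
    {
        /* Backing pixmaps for redirected windows are good candidates for
         * video memory; scratch pixmaps are not worth migrating. */
        Bool want_vram = (usage_hint == CREATE_PIXMAP_USAGE_BACKING_PIXMAP);

        return alloc_pixmap(pScreen, w, h, depth, want_vram);  /* hypothetical */
    }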
If the originating mode didn't have a name, we would end up with the name of
the original mode being set up correctly, but with the name of the copy still
being NULL.
In a multihead setup, if only the first screen can be
initialized, but the second screen is mentioned first in the
ServerLayout section, the xf86InitOrigins() function will crash
because the screen referred to in the e.g. "RightOf" part is
non-existent.
The transformation between fbdev and xfree86 mode timings needs to be
invertible, otherwise Xen and other framebuffers that don't have real
pixel clocks won't initialize.
This makes the root visual a GLX capable visual again and adds a GLX visual
for the COMPOSITE ARGB visual cleanly (as opposed to the hack we had before).
This changes the module initialization order so that the GLX module initializes
after COMPOSITE. The reason for this change is to be able to initialize a
GLX visual config for the COMPOSITE ARGB visual.
Call ProcessOtherEvents first, then for all keyboard devices let them be
wrapped by XKB. This way all XI events will go through XKB.
Note that the VCK is still not wrapped, so core events will bypass XKB.
(cherry picked from commit d627061b48)
Don't build XF86Misc or XF86Vidmode in hw/xfree86/dixmod when it's been
explicitly disabled in configure, or we don't have the proto modules
installed.
Instead of removing the preference bit marking the hardware declared mode
preference, leave it in place and just move the user preferred mode to the
front of the list while marking it with the USERPREF bit which will cause it
to be selected by the initial mode selection code.
Hide getline call by checking for glibc. If not, use fgetln instead. Even
though this section is now #ifdef'ed for linux only, this should help make
it more portable if non-linux folks end up wanting it.
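Roughly the shape of that portability dance, as a standalone sketch:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Read one line: getline() on glibc, fgetln() elsewhere (BSD libc).
     * The caller frees the returned buffer. */
    static char *
    read_line(FILE *f)
    {
    #ifdef __GLIBC__
        char *line = NULL;
        size_t n = 0;
        if (getline(&line, &n, f) == -1) {
            free(line);
            return NULL;
        }
        return line;
    #else
        size_t len;
        char *l = fgetln(f, &len);      /* not NUL-terminated, libc-owned */
        char *copy;
        if (!l || !(copy = malloc(len + 1)))
            return NULL;
        memcpy(copy, l, len);
        copy[len] = '\0';
        return copy;
    #endif
    }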
It contains static paths, fails to build on non-glibc, and apparently just
exists to support distributions managing binary drivers and open-source drivers
together. Also restores previous code for fallback to vesa if nothing is
detected.
Right now we default to "all" which gives us a situation much like before,
but when the "typical" option is implemented, we can change the default and
reduce the number of visuals the GLX module bloats the X server with.
Instead of the fragile setup where we filter the modes common between the
DDX generated GLX visuals and the DRI driver generated fbconfigs, we now
just take the fbconfigs returned by the DRI driver to be our supported set.
A lot of EDID writers apparently end up stuffing centimeters (like the
maximum image size field) into the detailed timings, instead of millimeters.
Some of them only get it wrong in one direction. Also, add a quirk to let
us mark the largest 75hz mode as preferred, which will often be used for
EDID 1.0 CRTs.
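The detection boils down to something like this (a hedged sketch with
simplified parameters; the real quirk lives in the EDID parser):

    /* If a detailed timing's "size in mm" matches the base block's maximum
     * image size in cm, the vendor stuffed centimeters in; scale it up.
     * Checking each axis separately covers the monitors that only get it
     * wrong in one direction. */
    static void
    quirk_detailed_size(int *h_mm, int *v_mm, int max_h_cm, int max_v_cm)
    {
        if (*h_mm == max_h_cm)
            *h_mm *= 10;
        if (*v_mm == max_v_cm)
            *v_mm *= 10;
    }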
If none is present, a default one will be created. This will be attached
to either the first device section in the xorg.conf (allowing you to
specify something like using EXA without having a screen section) or a
default screen section if none is present in the file.
This will allow the screen to not explicitly have a device section. If
this is the case and there is a device section in the xorg.conf, the first
one will be used. If there is no device section at all, a default one will
be created that loads the automatically determined module.
This is what we're currently shipping in Debian. Enables the ability for
drivers to ship a text file listing PCI IDs they support, and have the
server read them on startup when no driver is specified. This works, but
isn't the final solution.
* hw/kdrive/ephyr/ephyr.c:
(ephyrInitScreen): try and detect when the host has no
DRI support. In that case, switch to the -nodri behaviour.
When in the -nodri case, make sure not to skip glx visual
initialisation.
* hw/kdrive/ephyr/ephyrinit.c:
(ddxProcessArgument): disabling visual init here
is bad because it gets disabled even when we want
to use software GL, leading to Xephyr :1 -nodri
crashing in mesa.
We can now launch GL or XV apps in any of the
Xephyr screens we want.
* hw/kdrive/ephyr/hostx.c,h:
(hostx_get_window):
(hostx_create_window): make these functions be screen
number aware.
* hw/kdrive/ephyr/XF86dri.c : fix some compiler warnings.
* hw/kdrive/ephyr/ephyrdri.c:
(ephyrDRIQueryDirectRenderingCapable),
(ephyrDRIOpenConnection),
(ephyrDRIAuthConnection),
(ephyrDRICloseConnection),
(ephyrDRIGetClientDriverName),
(ephyrDRICreateContext),
(ephyrDRIDestroyContext),
(ephyrDRICreateDrawable),
(ephyrDRIGetDrawableInfo),
(ephyrDRIGetDeviceInfo): in all those functions, don't forward
the screen number we receive - from the client - to the host X.
We (Xephyr) are always targeting the same X display screen, which is
the one Xephyr got launched against. So we enforce that in the code.
* hw/kdrive/ephyr/ephyrdriext.c:
(EphyrMirrorHostVisuals): make this duplicate the visuals of the host X
default screen into a given Xephyr screen. This way we have a chance
to update the visuals of all Xephyr screen to make them mirror those
of the host X.
(many other places): specify screen number where required by the api
change in hostx.h.
* hw/kdrive/ephyr/ephyrglxext.c: specify screen number where required
by the api change in hostx.h
* hw/kdrive/ephyr/ephyrhostglx.c: don't forward the screen number we
receive - from the client - to the host X. We (Xephyr) are always
targeting the same X display screen, which is the one Xephyr got
launched against. So we enforce that in the code.
* hw/kdrive/ephyr/ephyrhostvideo.c,h: take into account the screen number received
from the client app. This is useful to know on which Xephyr screen we
need to display video stuff.
* hw/kdrive/ephyr/ephyrvideo.c: update this to reflect the API change
in hw/kdrive/ephyr/ephyrhostvideo.h.
(ephyrSetPortAttribute): when parameters are not valid
- they exceed their validity range - send them to the host anyway
and do not return an error to clients.
Some hosts expose buggy validity ranges, so rejecting clients for that
is too harsh.
* hw/kdrive/ephyr/hostx.c,h:
(hostx_has_xshape),
(hostx_has_glx),
(hostx_has_dri): added these new entry points
* hw/kdrive/ephyr/ephyrdriext.c:
(ephyrDRIExtensionInit):
check presence of DRI and XShape extensions before
trying to use them.
* hw/kdrive/ephyr/ephyrglxext.c:
(ephyrHijackGLXExtension):
check presence of glx extension before we use it.
* hw/kdrive/src/Makefile.am: use fb/fbcmap_mi.c
and not fb/fbcmap.c. This allows kdrive to take advantage of
extensions redefining the entry points of micmap.c stuff.
For instance it allows Xephyr to have a working GL, which is not
possible otherwise, because GL redefines mInitVisualsProc
to initialise its visuals.
* hw/kdrive/ephyr/ephyrdri.c:
(ephyrDRIGetDrawableInfo): force the back clipping rects
to equal the front clipping rects.
* hw/kdrive/ephyr/ephyrdriext.c:
(ProcXF86DRIGetDrawableInfo): properly overclip the clipping rects we
got from the client. This bug fixes a clipping rect that was too
small in height, basically. Also fix a possible mem corruption.
* hw/kdrive/ephyr/hostx.c:
(hostx_set_window_geometry): remove a useless XSync
* hw/kdrive/ephyr/ephyr.c:
(ephyrInitialize): cleanup ephyrDRI extension init.
remove functions that belongs in ephyrdriext.c .
* hw/kdrive/ephyr/ephyrdri.c:
(ephyrDRICreateDrawable): create the drawable on the host X peer
window, not on the host xephyr main window.
(ephyrDRIGetDrawableInfo): get drawable info of the host X peer
window.
* hw/kdrive/ephyr/ephyrdriext.c: make ephyr DRI extention wrap
a bunch of screen ops so that it can update the host X peer
window whenever DRI bound drawable are moved in Xephyr.
Also code the building blocks of the management of the
host X window peer.
* hw/kdrive/ephyr/hostx.c,h:
(hostx_create_window): added this new entry point
(hostx_destroy_window): ditto
(hostx_set_window_geometry): ditto
* hw/kdrive/ephyr/ephyrdri.c:
(ephyrDRIGetDrawableInfo): quickly hook
this into getting the drawable info from the host
X server. For the time being, this only gets the drawable info
of the Xephyr main window in the host. It should really get
the info of the peer drawable in the host X. So there should be a
peer drawable to begin with.
* hw/kdrive/ephyr/ephyrdriext.c:
(ProcXF86DRIGetDrawableInfo): some cleanups. Properly get the
drawable info, otherwise there is a host X hang.
* hw/kdrive/ephyr/ephyrhostglx.c:
(ephyrHostGLXQueryVersion): do not use the C bindings of the glx protocol
calls. Some of those actually access the DRI context directly, resulting
in the context having three clients. Instead, all XF86DRI proto
forwarding requests should be coded by hand and only forward the
protocol requests.
* hw/kdrive/ephyr/ephyrglxext.c:
fixed various logging functions
(ephyrGLXGetStringReal): make sure all the string is sent to clients
including the ending zero.
* hw/kdrive/ephyr/ephyrhostglx.c:
(ephyrHostGLXGetStringFromServer): better error handling.
(ephyrHostGLXSendClientInfo): ditto.
(ephyrHostGLXMakeCurrent): ditto
* hw/kdrive/ephyr/ephyr.c:
(EphyrDuplicateVisual): when duplicating the
visual, copy the color component masks and the class
from the host X.
(EphyrMirrorHostVisuals): don't mix up the blue and green masks.
* hw/kdrive/ephyr/ephyrdri.c: add more logs.
(ephyrDRICreateDrawable): actually implement this.
For the moment it creates a DRI drawable for the host X window,
no matter what drawable this call was issued for.
(ephyrDRIGetDrawableInfo): actually implemented this.
For the moment the drawable whose attrs are queried is the
Xephyr main window.
* hw/kdrive/ephyr/ephyrdriext.c:
(ProcXF86DRIGetDrawableInfo): properly hook this dispatch
function to the ephyrDRIGetDrawableInfo() function.
* hw/kdrive/ephyr/ephyrglxext.c: add a bunch of GLX implementation hooks
here. Hijack some of the xserver GLX hooks with them. Still need to
properly support byteswapped clients though.
* hw/kdrive/ephyr/ephyrhostglx.c,h: actually implemented the protocol
level forwarding functions used by the GLX entry points in
ephyrglxext.c. Here as well, there are a bunch of them, but we are
far from having implemented all the GLX calls.
* hw/kdrive/ephyr/hostx.c,h:
(hostx_get_window_attributes): added this new entry point
(hostx_allocate_resource_id_peer): added this to keep track of
resource IDs peers: one member of the peer is in Xephyr, the other
is in host X.
(hostx_get_resource_id_peer): ditto.
* hw/kdrive/ephyr/ephyr.c: make Xephyr mirror
the visuals of the host X upon startup. This
is important for GLX client apps.
* hw/kdrive/ephyr/hostx.c,h: add a hostx_get_visuals_info()
to get the visuals of the host X.
* hw/kdrive/ephyr/ephyrglxext.c:
(ephyrGLXGetFBConfigsSGIX): proxy the GLXGetFBConfigsSGIX call.
It is a vendor extension to get the visual configs as a list of
name/value pairs.
(ephyrHijackGLXExtension): hijack the VendorPriv_dispatch_info
dispatch table to register our implementation of GLXGetFBConfigsSGIX
(ephyrGLXGetFBConfigsSGIXReal): added this where the real
implementation of GLXGetFBConfigsSGIX is. It supports byte swapping.
(ephyrGLXGetFBConfigsSGIX,ephyrGLXGetFBConfigsSGIXSwap): these are
the dispatch entry points. They just call
ephyrGLXGetFBConfigsSGIXReal.
* hw/kdrive/ephyr/ephyrhostglx.c,h: reorganize the proxies to get
visual params from the host so that they clearly support the different
methods of doing so.
* hw/kdrive/ephyr/Makefile.am: add the proxy extension to
ephyr. The proxy extension is an experimental extension that
forwards protocol packets targeted at a given extension to the
host X.
* hw/kdrive/ephyr/ephyr.c: init proxy ext.
* hw/kdrive/ephyr/ephyrhostproxy.c,h: added this new file as part of the
proxy extension.
* hw/kdrive/ephyr/ephyrproxyext.c,h: ditto
* hw/kdrive/ephyr/hostx.c: add the hostx_get_get_extension_info() entry
point.
* hw/kdrive/ephyr/XF86dri.c: reformat this correctly.
Make function decls honour the ANSI C standard.
* hw/kdrive/ephyr/ephyr.c: protect glx/dri related
extension initialisation with the XEPHYR_DRI
macro. Initialize the GLX ext hijacking
at startup.
* hw/kdrive/ephyr/ephyrdri.c: add more logging to ease debugging
* hw/kdrive/ephyr/ephyrdriext.c: ditto. reformat.
* hw/kdrive/ephyr/ephyrglxext.c,h: add this extension to
proxy GLX requests to the host X. Started to proxy those needed to
make glxinfo work with fglrx. Not yet finished.
* hw/kdrive/ephyr/ephyrhostglx.c,h: put here the actual
Xlib code used to hit the host X server, because Xlib stuff cannot be
mixed with xserver internal code, otherwise compilation errors due to
type clashes happen. So no Xlib type should be exported by the
entrypoints defined here.
* hw/kdrive/ephyr/ephyrhostvideo.c/h:
(ephyrHostXVStopVideo): add this entry point.
* hw/kdrive/ephyr/ephyrvideo.c:
Basically add ReputImage and StopVideo implementations.
Now, when other windows obscure the video window, the reclipping
seems to be well handled using StopVideo and ReputImage.
To do this, I was obliged to save the frame in PutImage, so
that I could resend it in ReputImage.
* hw/kdrive/ephyr/ephyrvideo.c:
(ephyrXVPrivQueryHostAdaptors): properly set the
port private. This fixes a crash when
the host Xv supports multiple ports.
Make sure the number of ports cannot be zero.
* configure.ac,include/dix-config.h.in: define the XEPHYR_DRI macro.
define it when --enable-xephyr and --enable-dri are both turned on.
* hw/kdrive/ephyr/XF86dri.c: copy this from the mesa source to enable
Xephyr to talk the DRI protocol to the host X. In mesa, this is used by
libGL.so to talk the DRI protocol with the server.
* hw/kdrive/ephyr/ephyr.c: finally initialise the DRI extension
in the ephyrInitScreen() function.
* hw/kdrive/ephyr/ephyrdri.c,ephyrdriext.c: safeguard the compilation
using the XEPHYR_DRI macro.
* hw/kdrive/ephyr/ephyrdriext.c: added this to implement a DRI extension
into Xephyr. Normally the DRI extension is only present in the
xfree86 server, but I have ported it to Xephyr. The extension calls
functions declared/defined in ephyrdri.h and ephyrdri.c that
forward the DRI calls to the host X. It does not work yet, as this
entry is just to put the big bricks in place.
* hw/kdrive/ephyr/ephyrdri.c,h: declaration & definition of the
DRI client API that would hit the hostX server.
* hw/kdrive/ephyr/GL/internal/dri_interface.h: added this, otherwise
inclusion of /usr/include/X11/dri/xf86dri.h won't compile
* hw/kdrive/ephyr/ephyrhostvideo.c,h:
(ephyrHostXVPutImage): make this support clipping region.
The clipping region is propagated to host using XSetClipRectangles.
This changes the API of ephyrHostXVPutImage.
* hw/kdrive/ephyr/ephyrvideo.c:
(ephyrPutImage): propagate the clipping region to the new
ephyrHostXVPutImage() entry point.
* hw/kdrive/ephyr/ephyrvideo.c:
(ephyrInitVideo): make the EphyrXVPriv object a
singleton instance, otherwise a new object is created at each
generation.
* hw/kdrive/ephyr/ephyrhostvideo.c,h:
(ephyrHostXVAdaptorHasPutVideo): detect if
host X has the PutVideo call.
(ephyrHostXVAdaptorHasPutStill): detect if
host X has the PutStill call
(ephyrHostXVAdaptorHasPutImage): detect if
host X has the PutImage call
* hw/kdrive/ephyr/ephyrvideo.c:
(ephyrXVPrivQueryHostAdaptors): make sure to create
atoms for attribute names otherwise subsequent
calls to get/set attribute from clients won't work.
(ephyrXVPrivSetAdaptorsHooks): don't hardwire advertising
of the PutImage call. Instead, advertise the calls advertised
by the host.
* hw/kdrive/ephyr/ephyrhostvideo.c,h:
(ephyrHostXVLogXErrorEvent): add this to
log X error events. Heavily copied from libx11
(ephyrHostXVErrorHandler): new X error handler that
logs the error but does not exit.
(ephyrHostXVInit): add this to be called at the beginning
of xvideo lifetime. It sets an xerror handler that does not
exit.
* hw/kdrive/ephyr/ephyrvideo.c:
(ephyrXVPrivIsAttrValueValid): this validates an attribute
value.
(ephyrSetPortAttribute): before setting an attribute,
validate the new value so that we don't send a buggy
request to host X.
* hw/kdrive/ephyr/*.c: fix case in ephyrvideo code.
* hw/kdrive/ephyr/ephyr.c: fix a typo
* hw/kdrive/ephyr/ephyrhostvideo.c,h:
(EphyrHostXVPutImage): first implementation. does not
support clipping regions yet.
* hw/kdrive/ephyr/ephyrvideo.c:
(DoSimpleClip): clip using a clipping box. Does not
support regions yet.
(EphyrPutImage): first implementation.
Uses a simple clipping rectangle, no region yet.
* hw/kdrive/ephyr/hostx.c:
(hostx_get_window): added this to get the main
window of the host x.
* hw/kdrive/ephyr/ephyrhostvideo.c,h:
(EphyrHostXVQueryImageAttributes): add this call. It calls
XvQueryBestSize xserver entry point. It uses the protocol
level machinery because Xvlib does not expose that entry point
as a C function.
(EphyrHostXVQueryBestSize): added this wrapper around XvQueryBestSize().
(EphyrHostGetAtom, EphyrHostGetAtomName): added this to get
an atom or atom name from the host server
* hw/kdrive/ephyr/ephyrvideo.c:
(EphyrSetPortAttribute): convert the atom into a host server
atom before attacking the host server with it, because in
its current form the input atom is only valid in Xephyr.
This fix makes this call work.
(EphyrGetPortAttribute): ditto.
(EphyrQueryBestSize): implement this.
(EphyrQueryImageAttributes): implement this.
* hw/kdrive/ephyr/ephyrhostvideo.c:
(EphyrHostXVAdaptorGetVideoFormats): properly get visual class instead of
returning the visual id.
(EphyrHostXVQueryEncodings): properly copy the fields because simple casting does
truncate some fields.
(EphyrHostAttributesDelete): XFree the whole array instead of trying to free individual members.
* hw/kdrive/ephyr/ephyrvideo.c:
(ephyrInitVideo): fix a typo
(EphyrXVPrivQueryHostAdaptors): set the XvWindowMask mask on the adaptor type.
Use the host adaptor name. Don't forget to set the nImages field.
(EphyrXVPrivRegisterAdaptors): report an error when KdXVScreenInit() fails.
* This patch adds multiscreen support to Xephyr. For instance,
the command line : "Xephyr :4 -ac -screen 320x240 -screen 640x480"
will launch with two "screens" - namely two main windows.
The first main window represents a screen that has the number :4.0, with
a geometry of 320x240 pixels, and the second one represents a screen
that has the number :4.1 with a geometry of 640x480.
The command line: "DISPLAY=:4.1 xclock" will launch the xclock program
on the second screen, for instance.
* this patch was edited by Dodji Seketeli <dodji@openedhand.com> for:
- better style compliance with the rest of the Xephyr code
- make sure Xephyr could be launched with no -screen option. By
default that creates a default screen of 640x480 pixel like before
- display full titles on the windows - with instructions to grab
keyboard and mouse - like before.
DGAStealXXXEvent modified to take in device argument.
The evdev driver only sends one valuator when only one axis changed. We need
to check for DGA either way (xf86PostMotionEventP), otherwise we lose purely
horizontal/vertical movements.
Note that DGA does not do XI events.
Center the frame around the first pointer found and then update all pointers
on the same screen to move to the edges (if necessary).
Note: xf86WarpCursor needs to be modified; it is using the deprecated
miPointerWarpCursor and will kill the server when called with
inputInfo.pointer.
Removes "LookupKeyboardDevice" and "LookupPointerDevice" in favor of
inputInfo.keyboard and inputInfo.pointer, respectively; all use cases
are non-XI compliant anyway.
Matches linuxPci.c changes made in 8279444a54
Fixes compiler errors:
"ix86Pci.c", line 194: too many struct/union initializers
"ix86Pci.c", line 204: too many struct/union initializers
"ix86Pci.c", line 214: too many struct/union initializers