Formerly the code claimed it could only handle up to 256 visuals, which
was true. Also true, but not explicitly stated, was that it could only
handle visuals with VID < 256. If you have enough screens, and subsystems
that add lots of visuals, you can easily run off the end. (Made worse
because we allocate visual IDs from the same pool as XIDs.) If your app
then chooses a visual with VID > 256, the Xinerama code would throw BadMatch
on CreateColormap and your app wouldn't start.
With this change, PanoramiXVisualTable is gone. Other subsystems that
were using it as a translation table between each screen's visuals now
use a PanoramiXTranslateVisual() helper.
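A minimal sketch of what such a helper can look like, assuming the usual DIX
screen/visual structures (the real signature and matching criteria may
differ):

    /* Illustrative only: find the visual on 'screen' equivalent to
     * screen 0's visual 'vid' by matching properties, instead of
     * indexing a fixed 256-entry table. */
    static VisualID
    TranslateVisualSketch(int screen, VisualID vid)
    {
        ScreenPtr s0 = screenInfo.screens[0];
        ScreenPtr s  = screenInfo.screens[screen];
        VisualPtr v  = NULL;
        int i;

        for (i = 0; i < s0->numVisuals; i++)
            if (s0->visuals[i].vid == vid)
                v = &s0->visuals[i];
        if (!v)
            return 0;
        for (i = 0; i < s->numVisuals; i++)
            if (s->visuals[i].class == v->class &&
                s->visuals[i].nplanes == v->nplanes &&
                s->visuals[i].redMask == v->redMask)
                return s->visuals[i].vid;
        return 0;
    }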
This reverts commits 3abce3ea2b and 6cbaf15e61.
The memory returned to xf86LoadModule was allocated in doLoadModule, which
calls the respective module's PreInit. As it turns out, input and output
drivers store a pointer to the module elsewhere, so freeing it in
xf86LoadModule is a bad idea.
For further reference: hw/xfree86/common/xf86Helper.c
Input drivers: xf86InputDriverList[blah]->module = module;
Output drivers: xf86DriverList[blah]->module = module;
Unloading the module would not look pretty then.
LoadModule() returns the only reference to a fresh piece of memory (a
ModuleDescPtr). Sadly, xf86LoadModules dropped the return value on the floor,
leaking memory for each module it loaded.
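The shape of the fix, sketched with the loader API named above and a
hypothetical bookkeeping helper:

    /* Keep the descriptor LoadModule() hands back so the module can be
     * unloaded (or the descriptor freed) later, instead of leaking it. */
    int errmaj = 0, errmin = 0;
    ModuleDescPtr mod = LoadModule(name, NULL, NULL, NULL,
                                   NULL, NULL, &errmaj, &errmin);
    if (mod)
        RememberModule(mod);    /* hypothetical: store for later unload */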
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
All the failure paths were very diligent in freeing the "fullpath" temporary
string, but the success case was not. All the content had been strdup()ed
anyway, so fullpath itself is no longer live memory and is safe to free.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
xf86LogInit allocates a piece of memory and stores it in lf. LogInit() then
effectively strdup()s it, but lf itself is never freed.
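A sketch of the fixed pattern, with a hypothetical helper standing in for
the allocation described above:

    char *lf = BuildLogFileName();   /* hypothetical: the allocated name */
    LogInit(lf, ".old");             /* LogInit() keeps its own copy */
    free(lf);                        /* previously never happened */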
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
We need to start breaking the XKB API to enforce sanity, so drag whichever
headers we need to do so into the server tree, as the client API is set in
stone, being part of Xlib.
After trying to switch from X to VT (or just quit) the video-amd driver
attempts to issue INT 10/0 to go to mode 3 (VGA). The emulator, running
the BIOS code, would then spit out:
c000:0282: A2 ILLEGAL EXTENDED X86 OPCODE!
The opcode was 0F A2, or CPUID; it was not implemented in the emulator.
This simple patch, against 1.3.0.0, handles the CPUID instruction in one of
two ways:
1) if run on __i386__ or __x86_64__, it calls the CPUID instruction
directly;
2) if run elsewhere, it returns a canned 486dx4 set of values for
function 1.
This fix allows the video-amd driver to switch back to console mode,
with the GSW BIOS.
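An illustrative sketch of the two cases (the register plumbing of the real
emulator is omitted, these names are not its API, and the canned values are
representative):

    static void
    cpuid_sketch(unsigned int *ax, unsigned int *bx,
                 unsigned int *cx, unsigned int *dx)
    {
    #if defined(__i386__) || defined(__x86_64__)
        /* case 1: execute the host's real CPUID */
        __asm__ volatile ("cpuid"
                          : "=a" (*ax), "=b" (*bx), "=c" (*cx), "=d" (*dx)
                          : "a" (*ax), "c" (*cx));
    #else
        /* case 2: canned 486dx4 answer for function 1 */
        if (*ax == 1) {
            *ax = 0x00000480;    /* family 4, model 8 */
            *bx = *cx = 0;
            *dx = 0x00000001;    /* FPU present */
        } else {
            *ax = *bx = *cx = *dx = 0;
        }
    #endif
    }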
Thanks to Symbio Technologies for funding my work, and ThinCan for
providing hardware :)
Signed-off-by: Bart Trojanowski <bart@jukie.net>
Acked-by: Eric Anholt <eric@anholt.net>
'Loading foo' is verbosity 3, whereas 'already built-in' is verbosity 0.
This means that gdm's log would just be full of bare 'module already
built-in' messages.
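A sketch of the obvious remedy with the stock xf86MsgVerb() helper: log both
messages at the same verbosity, so neither shows up without the other:

    xf86MsgVerb(X_INFO, 3, "Loading %s\n", name);
    xf86MsgVerb(X_INFO, 3, "Module \"%s\" already built-in\n", name);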
xf86CrtcRotate() is called by randr 1.2 drivers via xf86CrtcSetMode() or
xf86SetDesiredModes() during ScreenInit(), at which point pScrn->pScreen is
not yet set. If a user specifies a rotation in their config file,
pScrn->pScreen is dereferenced and boom.
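A sketch of the kind of guard this needs (illustrative, not the actual
patch):

    if (pScrn->pScreen == NULL)
        return TRUE;   /* ScreenInit() not finished; set rotation up later */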
First mode is _always_ preferred in 1.4; the bit that used to mean this
now means that the preferred mode is also the native pixel format. The
old "is GTF" bit now means "is continuous-frequency" instead.
Section 3.6.4, Table 3.14: Feature Support, Notes 4 and 5.
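A sketch of the resulting interpretation of those two bits, assuming the
usual layout of the EDID feature support byte:

    Bool first_is_preferred, preferred_is_native, continuous_freq, has_gtf;
    if (revision >= 4) {
        first_is_preferred  = TRUE;                  /* always, in 1.4 */
        preferred_is_native = !!(features & 0x02);
        continuous_freq     = !!(features & 0x01);
    } else {
        first_is_preferred  = !!(features & 0x02);
        has_gtf             = !!(features & 0x01);
    }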
Nothing is actually decoded yet, but at least we print what they are.
New in EDID 1.4:
- Color Management Data (0xF9), Section 3.10.3.7
- CVT 3 Byte Code Descriptor (0xF8), Section 3.10.3.8
- Established Timings III Descriptor (0xF7), Section 3.10.3.9
- Manufacturer-specified data tag (0x00 - 0x0F), Section 3.10.3.12
It's out of date and not included in the build. Instead, xf86DefModeSet.c is
built from vesamodes and extramodes using modeline2c.awk and *that's* what gets
built.
From bugzilla bug 13467¹:
Currently the xserver fails to build without this (now deleted) file, as the
Makefile tries to distribute it. The patch simply removes the reference to
modeline2c.pl.
1] http://bugs.freedesktop.org/show_bug.cgi?id=13467
Signed-off-by: James Cloos <cloos@jhcloos.com>
Don't run VT switches, terminations, or the like on the core keyboard: only
run actions which affect the keyboard state. If we get an action such as VT
switch, just swallow the event.
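A sketch of that filter in terms of the real XkbSA_* action types (the
surrounding dispatch is simplified):

    switch (act.type) {
    case XkbSA_SwitchScreen:    /* VT switch */
    case XkbSA_Terminate:
        return;                 /* swallow: never run these for the core keyboard */
    default:
        break;                  /* state-affecting actions proceed as usual */
    }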
From bugzilla bug 13467¹:
The modeline2c script is the only part of the Xorg server that requires Perl.
[This] is a simpler replacement that works with any normal AWK.
1] http://bugs.freedesktop.org/show_bug.cgi?id=13467
Bug was posted by Joerg Sonnenberger <joerg@NetBSD.org>.
These hints allow an acceleration architecture to optimize allocation of certain
types of pixmaps, such as pixmaps that will serve as backing pixmaps for
redirected windows.
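For example, a driver's CreatePixmap hook could branch on the hint; a sketch
assuming the hook gains a usage argument and a constant along the lines of
CREATE_PIXMAP_USAGE_BACKING_PIXMAP (the helper below is hypothetical):

    static PixmapPtr
    driverCreatePixmap(ScreenPtr pScreen, int w, int h, int depth,
                       unsigned usage_hint)
    {
        if (usage_hint == CREATE_PIXMAP_USAGE_BACKING_PIXMAP)
            return createOffscreenPixmap(pScreen, w, h, depth); /* hypothetical */
        return fbCreatePixmap(pScreen, w, h, depth, usage_hint);
    }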
If the originating mode didn't have a name, we would end up with the name of
the original mode being set up correctly, but the name of the copy still
being NULL.
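A sketch of the invariant the fix restores, assuming the xf86 mode helpers
xf86DuplicateMode() and xf86SetModeDefaultName() (the latter names a mode
from its dimensions):

    DisplayModePtr copy = xf86DuplicateMode(orig);
    if (!copy->name)
        xf86SetModeDefaultName(copy);   /* e.g. "1024x768" */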
In a multihead setup, if only the first screen can be
initialized, but the second screen is mentioned first in the
ServerLayout section, the xf86InitOrigins() function will crash
because the screen referred to in e.g. the "RightOf" part is
non-existent.
The transformation between fbdev and xfree86 mode timings needs to be
invertible, otherwise Xen and other framebuffers that don't have real
pixel clocks won't initialize.
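A sketch of the two conversions with the degenerate no-clock case guarded
(fbdev expresses pixel clocks in picoseconds, xfree86 in kHz; a pixclock of
0 would otherwise divide by zero):

    /* xfree86 -> fbdev */
    var->pixclock = mode->Clock ? 1000000000 / mode->Clock : 0;
    /* fbdev -> xfree86: must take a zero pixclock back to a zero clock */
    mode->Clock = var->pixclock ? 1000000000 / var->pixclock : 0;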
This makes the root visual a GLX capable visual again and adds a GLX visual
for the COMPOSITE ARGB visual cleanly (as opposed to the hack we had before).
This changes the module initialization order so that the GLX module initializes
after COMPOSITE. The reason for this change is to be able to initialize a
GLX visual config for the COMPOSITE ARGB visual.
Call ProcessOtherEvents first, then let all keyboard devices be wrapped by
XKB. This way, all XI events will go through XKB.
Note that the VCK is still not wrapped, so core events will bypass XKB.
(cherry picked from commit d627061b48)
Don't build XF86Misc or XF86Vidmode in hw/xfree86/dixmod when it's been
explicitly disabled in configure, or we don't have the proto modules
installed.
Instead of removing the preference bit marking the hardware declared mode
preference, leave it in place and just move the user preferred mode to the
front of the list while marking it with the USERPREF bit which will cause it
to be selected by the initial mode selection code.
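A sketch of that list manipulation, assuming the M_T_USERPREF mode-type flag
and the xf86Crtc output structures (prev-pointer fixup omitted):

    /* after unlinking 'match' from its current position: */
    match->type |= M_T_USERPREF;     /* initial mode selection picks this */
    match->next = output->probed_modes;
    output->probed_modes = match;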
Hide the getline call by checking for glibc; if glibc isn't available, use
fgetln instead. Even though this section is now #ifdef'ed for Linux only,
this should help make it more portable if non-Linux folks end up wanting it.
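A sketch of the pattern, assuming a FILE *f and a hypothetical consumer for
each line:

    #ifdef __GLIBC__
        char *line = NULL;
        size_t len = 0;
        while (getline(&line, &len, f) != -1)
            consumeLine(line);           /* hypothetical */
        free(line);
    #else
        size_t len;
        char *line;
        while ((line = fgetln(f, &len)) != NULL)
            consumeChunk(line, len);     /* not NUL-terminated */
    #endif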
It contains static paths, fails to build on non-glibc, and apparently just
exists to support distributions managing binary drivers and open-source drivers
together. Also restores previous code for fallback to vesa if nothing is
detected.
Right now we default to "all" which gives us a situation much like before,
but when the "typical" option is implemented, we can change the default and
reduce the number of visuals the GLX module bloats the X server with.
Instead of the fragile setup where we filter the modes common between the
DDX generated GLX visuals and the DRI driver generated fbconfigs, we now
just take the fbconfigs returned by the DRI driver to be our supported set.