Save in a few special cases, _X_EXPORT should not be used in C source
files. Instead, it should be used in headers, and the corresponding C source
files should include those headers. The special cases are symbols that need
to be shared between modules but are not expected to be used by external
drivers, and symbols that are accessible via LoaderSymbol/dlopen.
This patch also conditionally adds some new sdk header files, depending
on which extensions are enabled. These files were added to match the pattern
of other extensions/modules, that is, having the headers "decide" symbol
visibility in the sdk. These headers are:
o Xext/panoramiXsrv.h, Xext/panoramiX.h
o fbpict.h (unconditionally)
o vidmodeproc.h
o mioverlay.h (unconditionally, used only by xaa)
o xfixes.h (unconditionally, symbols required by dri2)
LoaderSymbol and similar functions no longer have different prototypes in
loaderProcs.h and xf86Module.h, so both headers can be included without
the need to define IN_LOADER.
The xf86NewInputDevice() prototype was re-added to xf86Xinput.h, but
not exported (and with a comment about it).
Also, there is no need to call ShowCursor when SetCursorPosition already does it.
Based on a previous patch by Maarten Maathuis
Signed-off-by: Keith Packard <keithp@keithp.com>
Rather than assuming rules in the CoreKeyboardProc, init the default rules in
InitCoreDevices, then re-use them later.
In the xfree86 DDX, set the rules to "base" or "evdev", depending on whether
we'll load kbd or evdev.
If we create a new MD, use pc105,us as default and re-use the rules file used
previously.
Signed-off-by: Peter Hutterer <peter.hutterer@redhat.com>
When leaving 3D games such as quake3 or sauerbraten, a cursor may stay on the
screen. This is caused by one run of SW rendering for the SD, even though the
SD was attached to the VCP and thus has HW rendering capabilities.
Check for the SD's attachment (like in all other functions) before deciding on
SW or HW rendering.
X.Org Bug 16805 <http://bugs.freedesktop.org/show_bug.cgi?id=16805>
Signed-off-by: Peter Hutterer <peter.hutterer@redhat.com>
Just ignore devices after MAXDEVICES has been reached, but warn the user that
the devices are ignored.
Signed-off-by: Peter Hutterer <peter.hutterer@redhat.com>
This is done to actually change DIX_CFLAGS, as not all "modules" use
XORG_CFLAGS.
Also export the symbols that are required by other modules after
the change.
- Remove remaining references to XFree86-Misc options AllowNonLocalModInDev
and DisableModInDev.
- Remove remaining references to grab-breaking keys & associated options.
- Update description of Ctrl-Alt-Backspace to new -retro/DontZap defaults.
- Add description of new options -modalias and -showopts.
- Update list of modules loaded by default.
- Update input driver references from keyboard to evdev & kbd.
- Update list of driver man pages to match xf86-*-* drivers with man pages.
- Add See Also section to exa man page.
and various formatting/typo/etc. fixes.
The Xorg/xorg.conf sections on input device selection could use further
updates to better match the current state of HAL-enabled configuration.
Only the warnings that point to real problems were corrected. The most
common one is passing 64-bit integers as "printf %l" arguments.
Note that there is a patch related to this at:
http://bugs.freedesktop.org/show_bug.cgi?id=18204
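As a hedged illustration of this class of warning (not code from the patch
itself; the variable name is made up), passing a 64-bit value where the
format string expects a long misreads the argument on 32-bit targets; a
matching conversion macro (or an explicit cast) fixes it:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t serial = 1234567890123LL;   /* made-up example value */

        /* Broken on ILP32 targets: "%ld" expects a 32-bit long here,
         * so the 64-bit argument is read incorrectly. */
        /* printf("serial %ld\n", serial); */

        /* Portable: PRId64 expands to the right conversion for int64_t. */
        printf("serial %" PRId64 "\n", serial);
        return 0;
    }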
These symbols were removed from the X Server, or never declared.
One symbol that may need special attention is XkbBuildCoreState(),
which doesn't have a prototype anywhere, but is called from
xkb/xkbEvents.c:XkbFilterEvents() and is also used by the macros
XkbStateFieldFromRec() and XkbGrabStateFromRec() defined in
include/xkbstr.h.
fb/wfbrename.h may also need some cleanup, as it "renames" several
non-existent symbols.
This is the biggest "visibility" patch. Instead of exporting symbols on
demand, export everything in the sdk, so that if some module fails due to an
unresolved symbol, it is because it is using a symbol not in the sdk.
Most exported symbols shouldn't really be made visible, nor advertised in
the sdk, as they are only used by a single shared object.
Symbols in the sdk (or referenced in sdk macros), but not defined
anywhere include:
XkbBuildCoreState()
XkbInitialMap
XkbXIUnsupported
XkbCheckActionVMods()
XkbSendCompatNotify()
XkbDDXFakePointerButton()
XkbDDXApplyConfig()
_XkbStrCaseCmp()
_XkbErrMessages[]
_XkbErrCode
_XkbErrLocation
_XkbErrData
XkbAccessXDetailText()
XkbNKNDetailMaskText()
XkbLookupGroupAndLevel()
XkbInitAtoms()
XkbGetOrderedDrawables()
XkbFreeOrderedDrawables()
XkbConvertXkbComponents()
XkbWriteXKBSemantics()
XkbWriteXKBLayout()
XkbWriteXKBKeymap()
XkbWriteXKBFile()
XkbWriteCFile()
XkbWriteXKMFile()
XkbWriteToServer()
XkbMergeFile()
XkmFindTOCEntry()
XkmReadFileSection()
XkmReadFileSectionName()
InitExtInput()
xf86CheckButton()
xf86SwitchCoreDevice()
RamDacSetGamma()
RamDacRestoreDACValues()
xf86Bpp
xf86ConfigPix24
xf86MouseCflags[]
xf86SupportedMouseTypes[]
xf86NumMouseTypes
xf86ChangeBusIndex()
xf86EntityEnter()
xf86EntityLeave()
xf86WrapperInit()
xf86RingBell()
xf86findOptionBoolean()
xf86debugListOptions()
LoadSubModuleLocal()
LoaderSymbolLocal()
getInt10Rec()
xf86CurrentScreen
xf86ReallocatePciResources()
xf86NewSerialNumber()
xf86RandRSetInitialMode()
fbCompositeSolidMask_nx1xn
fbCompositeSolidMask_nx8888x0565C
fbCompositeSolidMask_nx8888x8888C
fbCompositeSolidMask_nx8x0565
fbCompositeSolidMask_nx8x0888
fbCompositeSolidMask_nx8x8888
fbCompositeSrc_0565x0565
fbCompositeSrc_8888x0565
fbCompositeSrc_8888x0888
fbCompositeSrc_8888x8888
fbCompositeSrcAdd_1000x1000
fbCompositeSrcAdd_8000x8000
fbCompositeSrcAdd_8888x8888
fbGeneration
fbIn
fbOver
fbOver24
fbOverlayGeneration
fbRasterizeEdges
fbRestoreAreas
fbSaveAreas
composeFunctions
VBEBuildVbeModeList()
VBECalcVbeModeIndex()
TIramdac3030CalculateMNPForClock()
shadowBufPtr
shadowFindBuf()
miRRGetScreenInfo()
RRSetScreenConfig()
RRModePruneUnused()
PixmanImageFromPicture()
extern int miPointerGetMotionEvents()
miClipPicture()
miRasterizeTriangle()
fbPush1toN()
fbInitializeBackingStore()
ddxBeforeReset()
SetupSprite()
InitSprite()
DGADeliverEvent()
SPECIAL CASES
o defined as _X_INTERNAL
xf86NewInputDevice()
o defined as static
fbGCPrivateKey
fbOverlayScreenPrivateKey
fbScreenPrivateKey
fbWinPrivateKey
o defined in libXfont.so, but declared in xorg/dixfont.h
GetGlyphs()
QueryGlyphExtents()
QueryTextExtents()
ParseGlyphCachingMode()
InitGlyphCaching()
SetGlyphCachingMode()
This patch exports all symbols required by the xorg/driver/* modules
that compile (on an x86 Linux machine).
Still missing symbols worth mentioning are:
sunleo
miFindMaxBand no longer available
intel (uxa/uxa-accel.c)
fbShmPutImage no longer available (and should have been static)
mga
MGAGetClientPointer (should come from matrox's libhal)
This is not a definitive "visibility" patch, as all it does is export
missing symbols; the modules that currently don't compile may require
more symbols once fixed, and third-party drivers may also require more
symbols to be exported.
A "definitive" patch should export symbols defined in the sdk.
../../../../hw/xfree86/xaa/xaaTables.c:9:14: warning: symbol 'byte_expand3' was not declared. Should it be static?
../../../../hw/xfree86/xaa/xaaTables.c:53:14: warning: symbol 'byte_reversed_expand3' was not declared. Should it be static?
The patch removes all macros in the format
#define xf86_sym ((type (*)(argument-list))LoaderSymbol("sym"))
creates a new macro in the format
#define xf86_sym sym
and ensures "sym" is a "visible" symbol.
The patch doesn't add or remove features, and is source and binary
compatible with previous shared objects (with the difference that it
requires the dlloader).
These symbols are a special case: because LoaderSymbol was being used to
reference them, they are not easily found by "automated" tools that check
for missing symbols. This change also has the benefit that the
compiler/loader "knows what is going on".
pixman 0.13.2 now holds all of the matrix operations. This leaves
the protocol conversion routines and some ABI stubs in place.
Signed-off-by: Keith Packard <keithp@keithp.com>
Includes fixes for:
"xf86Config.c", line 2434: warning: argument #1 is incompatible with prototype:
prototype: pointer to struct _DisplayModeRec: "xf86.h", line 351
argument : pointer to const struct _DisplayModeRec
"xf86EdidModes.c", line 312: warning: argument #1 is incompatible with prototype:
prototype: pointer to struct _DisplayModeRec: "../../../hw/xfree86/common/xf86.h", line 351
argument : pointer to const struct _DisplayModeRec
"xf86EdidModes.c", line 438: warning: assignment type mismatch:
pointer to struct _DisplayModeRec "=" pointer to const struct _DisplayModeRec
"xf86Modes.c", line 701: warning: assignment type mismatch:
pointer to struct _DisplayModeRec "=" pointer to const struct _DisplayModeRec
helper_exec.c: In function ‘port_rep_inb’:
helper_exec.c:219: warning: implicit declaration of function
‘DEBUG_IO_TRACE’
helper_exec.c:219: warning: nested extern declaration of
‘DEBUG_IO_TRACE’
Doing projective transforms required repositioning the cursor using the
hotspot, but that requires relocating the upper left corner in terms of said
hotspot.
Instead of using a separate function to notify DIX about transform changes,
add the transform to RRCrtcNotify so that the whole Crtc state changes
atomically.
RandR matrix computations lose too much precision in fixed point;
computations using the inverted matrix can be as much as 10 pixels off.
Convert them to double precision values and pass those around. These API
changes are fairly heavyweight; the official Render interface remains fixed
point, so the fixed point matrix comes along for the ride everywhere.
Add APIs to xf86RandR12 support and randr extension to record whether the
driver supports transforms, report that value in the RRGetCrtcTransform
reply.
New RRCrtcGetTransform function in DIX that DDX can use to get the pending
transform. The DDX code should be complete; the DIX code is just a stub at
this point.
Drivers that care about crtc positions on the screen to ensure that vblank
works correctly need to be notified when crtcs are changed.
Provide a hook in the mode setting code that is invoked whenever any
configuration is done to the screen.
Use this new hook in the DRI code so that DRI clients are notified and
receive updated information.
Signed-off-by: Keith Packard <keithp@keithp.com>
The xfree86 server previously had NewInputDeviceRequest and InitInput, and
both basically did the same thing. Reduce NIDR to parameter checking and use
xf86NewInputDevice from both InitInput and NIDR to actually create the device.
Signed-off-by: Peter Hutterer <peter.hutterer@redhat.com>
Signed-off-by: Daniel Stone <daniel@fooishbar.org>
Signed-off-by: Adam Jackson <ajax@redhat.com>
When the Linux kernel sets the NX bit, vm86 segfaults when it tries to execute
code in memory that is not marked EXEC. Such code gets called whenever
we return from a VBIOS call to signal the calling program that the call
is actually finished and that we are not trapping for other reasons (like
IO accesses).
Use mprotect(2) to set these memory ranges PROT_EXEC.
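A minimal sketch of the fix, assuming the stub's address and length are known
to the int10 code (the helper name and page rounding here are illustrative,
not the actual patch):

    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Round the stub's range out to page boundaries and add PROT_EXEC so
     * vm86 may jump back into it even with NX enforced. */
    static int make_stub_executable(void *addr, size_t len)
    {
        uintptr_t page = (uintptr_t)sysconf(_SC_PAGESIZE);
        uintptr_t start = (uintptr_t)addr & ~(page - 1);
        uintptr_t end = ((uintptr_t)addr + len + page - 1) & ~(page - 1);

        return mprotect((void *)start, end - start,
                        PROT_READ | PROT_WRITE | PROT_EXEC);
    }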
There's little chance that we'll get the input devices at runtime without HAL,
so we might as well force the server to add mouse/kbd devices automatically -
just like in the olden days.
Signed-off-by: Peter Hutterer <peter.hutterer@redhat.com>
These values need not be constrained to integer values.
Signed-off-by: Peter Hutterer <peter.hutterer@redhat.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
As reported in http://bugs.freedesktop.org/show_bug.cgi?id=18438
the server suggests reconfiguring HAL if AllowEmptyInput is enabled
and no input devices are known.
Instead of that notice, if HAL is disabled at configure time,
AllowEmptyInput is enabled in the config, and no input devices are
found, report those facts and recommend disabling AllowEmptyInput.
When using any XAAPixmapOps, we call into unknown but freshly
unwrapped callbacks (like fb ones). Unlike the XAA*Fallback calls,
we did so without syncing first, exposing us to all kinds of
synchronisation issues.
I believe that the rendering errors appeared now because *PaintWindow
vanished (e4d11e58), and we just use miPaintWindow instead. This
takes a less direct route to the hw and ends up at
PolyFillRectPixmap, which very often left drawing artifacts.
We now sync accordingly, and no longer get the rendering artifacts I
was methodically reproducing on radeonhd, radeon, unichrome...
Also, in order to allow driver authors to remove extensive syncing
or flushing to hide this issue, create XAA_VERSION_ defines, put
them in xaa.h and bump the patchlevel.
(novell bug #435791)
When setting the depth to 24, leave bpp unset so the logic to pick
a supported value is used instead of ignoring the driver's preference
and forcing 32 bpp.
Works around a silly bug in the kernel that causes wakeup storms after
too many keypresses. Should fix the kernel bug too, but this at least
keeps the idle wakeup count below 1000/sec.
When a user specifies the position of an output for which no modes exist
(for whatever reason) assume that the width and height of this output
is 0. The result will be the same as if this output isn't taken into
consideration at all and thus should be sane. It will prevent a segfault
when trying to determine the width and height of a non-existent mode.
Also merge sun_bios.c into sun_vid.c and upstream Solaris patch to
keep aperture device open, to allow mappings to occur after X server
has given up uid 0.
Maybe one day I'll stop doing stupid patches like
a3a7c12fcf.
So, if X < low, reset to low, and _not_ to high.
If X > high, reset to high, and _not_ to low.
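In other words, the corrected clamping logic looks like this (a hedged
sketch; names are illustrative, not the variables from the patch):

    static int clamp(int x, int low, int high)
    {
        if (x < low)
            return low;    /* reset to low, not to high */
        if (x > high)
            return high;   /* reset to high, not to low */
        return x;
    }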
This consists of two parts:
In the implicit server layout, ignore those drivers when looking for a core
device.
And after finishing the server layout, run through the list of devices and
remove any that use mouse or kbd.
AEI is mutually exclusive with the kbd and mouse drivers, so pick either - or.
Signed-off-by: Peter Hutterer <peter.hutterer@redhat.com>
The precedence of == is higher than that of &, so that code was
probably buggy.
xf86Init.c: In function 'DoModalias':
xf86Init.c:300: warning: suggest parentheses around comparison in operand of &
xf86Init.c:304: warning: suggest parentheses around comparison in operand of &
xf86Init.c:308: warning: suggest parentheses around comparison in operand of &
xf86Init.c:136: warning: function declaration isn't a prototype
xf86Init.c:243: warning: function declaration isn't a prototype
xf86Init.c:249: warning: function declaration isn't a prototype
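A hedged illustration of the precedence issue flagged above by the
"suggest parentheses" warnings (FLAG_FOO is a made-up flag, not one from
xf86Init.c):

    #define FLAG_FOO 0x4

    /* '==' binds tighter than '&', so the first form is parsed as
     * flags & (FLAG_FOO == 0), i.e. flags & 0 -- always false. */
    static int flag_clear_buggy(unsigned flags)
    {
        return flags & FLAG_FOO == 0;     /* what the warning points at */
    }

    static int flag_clear_fixed(unsigned flags)
    {
        return (flags & FLAG_FOO) == 0;   /* the intended test */
    }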
xaaInit.c: In function 'XAAInit':
xaaInit.c:201: warning: implicit declaration of function 'miInitializeCompositeWrapper'
xaaInit.c:201: warning: nested extern declaration of 'miInitializeCompositeWrapper'
Add missing includes to fix the following warnings:
xf86DGA.c: In function 'DGAProcessKeyboardEvent':
xf86DGA.c:1050: warning: implicit declaration of function 'UpdateDeviceState'
xf86DGA.c:1050: warning: nested extern declaration of 'UpdateDeviceState'
xf86Xinput.c: In function 'xf86ActivateDevice':
xf86Xinput.c:303: warning: implicit declaration of function 'AssignTypeAndName'
xf86Xinput.c:303: warning: nested extern declaration of 'AssignTypeAndName'
xf86Xinput.c:311: warning: implicit declaration of function 'DeviceIsPointerType'
xf86Xinput.c:311: warning: nested extern declaration of 'DeviceIsPointerType'
xf86Xinput.c:324: warning: implicit declaration of function 'XkbSetExtension'
xf86Xinput.c:324: warning: nested extern declaration of 'XkbSetExtension'
Add automatic detection of the graphic driver to load for sbus devices.
This allows xorg to work on those devices without a "Device" section.
Debian bug#483942.
Signed-off-by: Julien Cristau <jcristau@debian.org>
Also set AutoAddDevices and AutoEnableDevices to their defaults.
And in doing so, switch the rest of the defaults over to named initializers.
Signed-off-by: Peter Hutterer <peter.hutterer@redhat.com>
Usually, the console is set to RAW in the kbd driver. If we hotplug all input
devices (i.e. use the evdev driver for keyboards), the console is left as-is.
As a result, the evdev driver must put an EVIOCGRAB on the device to avoid
characters leaking onto the console. This again breaks many things, amongst
them lirc, in-kernel mouse button emulation and HAL.
This patch sets the console to RAW if AllowEmptyInput is on.
Use-cases:
1. AEI is off
1.1. Only kbd driver is used - behaviour as-is.
1.2. kbd and evdev drivers are used: if evdev does not grab the device,
duplicate events are generated.
2. AEI is on
2.1. Only evdev driver is used - behaviour as-is, but evdev does not need
to grab the device anymore.
2.2. evdev and kbd are used: duplicate key events are generated if evdev
does not grab the device.
1.2 is a marginal use-case that can be fixed by adding a "grab" option to the
evdev driver (update of xorg.conf is needed).
2.2 is an issue. If we have no ServerLayout section, AEI is on, but devices
specified in the xorg.conf are still added [1], resulting in duplicate events.
This is a common configuration and needs sorting out.
[1] 2eaed4a10f
Signed-off-by: Peter Hutterer <peter.hutterer@redhat.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
I've seen about one case in three years where this has actually been
correlated with the real cause of failure, and we've trained people to
freak out about X_WARNING, so let's be less alarmist.
Very cute, Samsung, not only do you claim to be 16cm by 9cm in the
global size record, you also claim to be 160mm by 90mm in the detailed
timings. Grrr.
NIDR should be used to create a new SD from e.g. within a driver.
DIDR should be used to remove a device from the server.
Signed-off-by: Peter Hutterer <peter.hutterer@redhat.com>
If you need to bail out the server, use Ctrl-Alt-Fx, or enable zapping
if it bothers you that much. If Ctrl-Alt-Fx is broken, nag me until
it's permanently fixed.
The nvidia driver currently uses these hooks to work around problems where RAC
will disable access to the hardware at unexpected times. This change restores
these hooks until we can come up with a better API for working around RAC.
This reverts commit c1df4fbede.
The nvidia driver currently uses these callbacks to work around problems where
RAC will disable access to the hardware at unexpected times. This change
restores these hooks until we can come up with a better API for working around
RAC.
This reverts commit d7c0ba2e9e.
Conflicts:
hw/xfree86/loader/xf86sym.c
Some BIOSes (hi XGI!) will attempt to enumerate the PCI bus by asking
for the config space of every possible device number. This despite
perfectly functional BIOS methods to enumerate the bus exactly.
If anyone can come up with an example of a bus where:
- both i/o and memory resources are addressable
- access to them can be controlled
- but they can't be controlled independently
then by all means, reinstate this logic.
Under the terms of version 1.1, "once Covered Code has been published
under a particular version of the License, Recipient may, for the
duration of the License, continue to use it under the terms of that
version, or choose to use such Covered Code under the terms of any
subsequent version published by SGI."
FreeB 2.0 license refers to "dates of first publication". They are here
taken to be 1991-2000, as noted in the original license text:
** Original Code. The Original Code is: OpenGL Sample Implementation,
** Version 1.2.1, released January 26, 2000, developed by Silicon Graphics,
** Inc. The Original Code is Copyright (c) 1991-2000 Silicon Graphics, Inc.
** Copyright in any portions created by third parties is as indicated
** elsewhere herein. All Rights Reserved.
Official FreeB 2.0 text:
http://oss.sgi.com/projects/FreeB/SGIFreeSWLicB.2.0.pdf
As always, this code has not been tested for conformance with the OpenGL
specification. OpenGL conformance testing is available from
http://khronos.org/ and is required for use of the OpenGL logo in
product advertising and promotion.
The check can fail because the output from FBIOGET_VSCREENINFO is used to set
Clock in fbdev2xfree_timing(). Then in fbdevHWSetMode(), xfree2fbdev_timing()
is called, which sets the pixclock based on Clock. The resulting round trip
introduces slight rounding errors, causing the comparison check in
fbdev_modes_equal to fail.
- Redo damage naming for more consistency.
- Call post submission functions only where appropriate.
- EXA can now live without its odd damage workarounds.
No point warning about missing driver hooks, that just means the person
who gave you the driver is inept. Might as well just crash. Also,
just name anonymous screens as screen%d instead of failing after the 36th
screen. Bonus points if you can figure out what the failure mode would
be on the 36th screen, and what the effective screen limit was.
There is no way this code can have been building for anyone since pciaccess
was merged. BSD and Linux were already using OS code on sparc, the only
people who could want this are Solaris, who should be using pciaccess
anyway.
This code was effectively only used in ix86Pci.c to select PCI config
access type. Nobody should be using that path anymore, in the glorious
pciaccess world; kernel services should get it right for you.
The defaults from InitVelocityData() or hypothetical driver-side changes
are now respected, not overridden.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
If the device doesn't have any BARs then it's just a stub for some
lame operating systems that need one PCI device per output for
multihead. No point in warning about it.
It's all a bit wonky since both sis(4) and xgi(4) claim to support the
Volari Z7 and V5/8 (0x0020 and 0x0040), so let's side with xgi(4), why
not. Note that the V3 (not V3XT) identifies itself as a trident chip.
Put out a warning if xorg.conf has InputDevice sections, but these aren't
referenced in the used ServerLayout. This is only performed if AllowEmptyInput
is enabled.
The reason behind this is that the server used to auto-add the first
mouse/keyboard sections if none were referenced. Now, with HAL and AEI
enabled by default, setups that relied on this auto-adding break and are left
without input devices. The least we can do is warn them.
Us shipping a GUI configuration utility (especially as part of the
server!) was pretty pointless. There was pretty much nothing it could
configure which wasn't already runtime adjustable: if you could get a
server up with functioning input and output, there wasn't much xorgcfg
could do for you.
Au revoir.
- Use a single common function to compute reducedness.
- Call it from both the old-school and new-school mode validation paths.
- Define monitor reduced-blanking support in accord with EDID 1.4.
- Attempt to filter RB DMT modes away from the "standard" EDID pool if
the monitor doesn't claim RB support.
On some panels you end up with all of:
- No range descriptor
- No description of physical connectivity
- Native panel size mode in standard timings list
In principle you're supposed to use the timings for that mode from the DMT
spec, but in practice the DMT spec has timings for both 1920x1200 normal
and 1920x1200RB, and the standard timing field gives you no way to
distinguish. And, of course, the non-RB timings don't fit in a single
DVI link.
A couple of #if defined(Lynx) && defined(sun) had become just #if defined(sun),
resulting in wrong settings for Solaris builds, so they're now just deleted.
OsInitColors always just returned TRUE, so just remove calls to it and
insane special-case logic. Remove unused kcolor.c implementation, and
merge oscolor.h into oscolor.c since it was the only user. Remove
open-coded strncasecmp in oscolor.c.
Since we no longer need to call OsInitColors after reading the config
file, just call PostConfigInit() from one place, and move PM handling to
one place so we can install the signal handlers earlier.
If devices are prepended to the list, their wake-up order on resume is not the
same as the original initialisation order. Hot-plugged devices, originally
inited last, are re-enabled before the xorg.conf devices and in some cases may
steal the device files. Result: we have different devices before and after
suspend/resume.
RedHat Bug 439386 <https://bugzilla.redhat.com/show_bug.cgi?id=439386>
- Allow returning multiple drivers to try for a given PCI id (for instance,
try "geode" then "amd" for AMD Geode hardware)
- On Solaris, use VIS_GETIDENTIFIER ioctl as well as PCI id to choose drivers
- Use wsfb instead of fbdev as a fallback on non-Linux SPARC platforms
Remove AEI check from configImpliedLayout as the setting isn't actually parsed
at this point anyway (written by Sasha Hlusiak).
Resurrect checkInput() and check for devices there if AEI is false (this also
creates the default devices if required).
Set AllowEmptyInput to enabled by default if hotplugging is enabled.
If no Screen is specified in the ServerLayout section, either take the first
one from the config file or autogenerate a default screen.
X.Org Bug 16301 <http://bugs.freedesktop.org/show_bug.cgi?id=16301>
RandR 1.1 has a physical size for each mode. It used to be that the DIX would
remember these modes and pass them back up to the DDX when changing the screen
configuration. The DDX uses RR_GET_MODE_MM to query the driver for the physical
dimensions of the screen, allowing it to preserve the DPI.
With RandR 1.2, the physical dimensions are stored as part of the output, rather
than per mode. The DIX only uses the sizes passed in from the DDX to select the
mode pool for the "default" output, and forgets the physical sizes. Then, when
reconfiguring the screen, it makes up a new RRScreenSizeRec using the dimensions
from the output, screwing up the DPI.
This change works around this problem by ignoring the DIX and querying the real
size from the driver.
This reverts commit 76576c87b0.
which was an incorrect revert of previous ABI bumps. Those
responsible for the accidental ABI bumps in both directions
have all been sacked.
This allows xf86-input-mouse to build again, for example.
Spiritual revert of 1fa4de80fc. Intel's C
compiler claims to be gcc-compatible; if they're not defining the same
macros as gcc then that's their bug, not ours. Even if we were to do
this aliasing we should do it once and for all in servermd.h.
Use only %di to name the PCI register to read/write, rather than %edi.
DOS is only expecting the base PCI config space anyway, and the BIOS
might be using the high bits of %edi.
Yes, this is a 486+ instruction and thus not strictly legal in vm86
mode, but enough BIOSes use it (looking at you VIA) that we might as
well implement it.
In the single-output-enabled case we never enter the loop, so test never
gets set and we fail to match a good mode.
This was causing my 2560x1600 to end up at 2048x1536.
The problem happens if Monitor/Card combo doesn't provide EDID info,
and the XFree86-VidModeExtension extension is used.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
Recording damage from other operations (e.g. creating a client damage record)
may confuse the migration code resulting in corruption.
Option "EXAOptimizeMigration" appears safe now, so enable it by default. Also
remove it from the manpage, as it should only be necessary on request in the
course of bug report diagnostics anymore.
GNU/kFreeBSD defines __FreeBSD_kernel__, but not __FreeBSD__.
Unify preprocessor conditionals between variable declaration and use.
Debian bug #482550.
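A hedged sketch of the unified guard; the variable name is hypothetical, the
point being that the declaration and every use carry the same combined
condition:

    /* GNU/kFreeBSD runs the FreeBSD kernel (defining __FreeBSD_kernel__)
     * on top of a glibc userland, so __FreeBSD__ is not defined.
     * Declaration and every use must share the same combined guard. */
    #if defined(__FreeBSD__) || defined(__FreeBSD_kernel__)
    extern int freebsd_kernel_only_variable;   /* hypothetical name */
    #endif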
During GetPointerEvents (and others), we need to access the last coordinates
posted for this device from the driver (not as posted to the client!). Lastx/y
is ok if we only have two axes, but with more complex devices we also need to
transition between all other axes.
ABI break, recompile your input drivers.
This copies over the files generated from mesa/src/mesa/glapi. There's
a corresponding mesa commit that makes it easy to generate the glapi files
straight into the xserver tree when the XML definitions change.
The only few files that are copied from mesa but aren't generated are
glapi.[ch] and glthread.[ch]. Everything in there is technically DRI
driver API and the whole setup is still a bit fragile, but it's not a new
problem.
The --with-mesa-source configure option is still around since other
parts of the server (XGL and DMX - grep for MESA_SOURCE) need that,
but for common case of building with GLX and AIGLX support, that
option is no longer needed.
Conflicts:
Xext/xprint.c (removed in master)
config/hal.c
dix/main.c
hw/kdrive/ati/ati_cursor.c (removed in master)
hw/kdrive/i810/i810_cursor.c (removed in master)
hw/xprint/ddxInit.c (removed in master)
xkb/ddxLoad.c
If the monitor isn't reduced-blanking (either through EDID logic, or
config file setting), then remove RB modes from the default pool. Any
RB modes from the driver and config file pools will stick around though;
you asked for them, you got them.
Seeing as this code seems to be specific to OpenBSD, I don't think
__x86_64__ should have been added there at all. It appears to have
been added wherever __amd64__ existed before, which is wrong. I
think that part of the commit should be reverted, and also that all four
of the checks should be __OpenBSD__ && __amd64__ instead of two in one
direction and two flipped.
The first guess used to be "is the preferred mode for one output the
preferred mode on all outputs". Instead, do "find the largest mode that's
preferred for at least one output and available on all outputs".
Old logic was just the first one that happened to have an associated
CRTC. The new logic tries to find one that's definitely connected, has
probed modes, and has the largest candidate mode.
It was removed, and some conditionals were simplified. We don't need to test
for pDev->isMaster inside xf86CursorSetCursor() because only MDs enter there.
In the last chunk, ScreenPriv fields were being assigned without need, so
that code was wrapped inside the conditional to avoid it.
I also tried to make the indentation more sane in some parts that I touched.
Signed-off-by: Tiago Vignatti <vignatti@c3sl.ufpr.br>
Minor modification, part of the original patch led to cursors not being
updated properly when controlled through XTest.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
The only function that can set SWCursor before xf86DeviceCursorInitialize()
is xf86InitCursor(), when the VCP is created.
Signed-off-by: Tiago Vignatti <vignatti@c3sl.ufpr.br>
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
Missing parameter caused event processing to go nuts when checking valuators.
X.Org Bug 15936 <http://bugs.freedesktop.org/show_bug.cgi?id=15936>
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
Since glyphs are stored in pixmaps now, they can make their way into VRAM,
which invalidates a bunch of fast-path assumptions in the XAA code. Thus
you end up doing color-expands or WriteBitmap from la-la land and your
aliased glyphs go all funny.
Since XAA isn't ever growing the ability to do sane glyph accel, just force
glyph pixmaps into host memory by catching them at CreatePixmap time.
We need a manual call to SetCursor when we switch from SW to HW rendering and
the other way round. This way we display the new cursor after removing the old
one.
In addition, we only update the internal state for the VCP's sprite. This way,
when we switch back to HW rendering the state is up-to-date and wasn't
overwritten with the other sprite's state.
The second part is a hack. It would be better to keep a state for each sprite,
but then again we don't have hardware that can render multiple cursors so we
might as well do with the hack.
Switches back to HW cursors when sprites other than the VCP are removed.
The current state requires the cursor to change shape once before it updates
to SW / HW rendering (whatever is appropriate), e.g. by moving into a
different window. Until this is done, the cursor is invisible.
This patch only creates a Files section if required, so if no entries are
added, an empty Files section will not be created.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
LeaveVT/EnterVT cycles will free/realloc shadow frame buffers. Because of
this, the presence/absence of that data is insufficient to know whether
the screen function wrappers are necessary. Instead, the 'transform_in_use'
flag should be used.
This patch also adds 'xf86RotateFreeShadow' for drivers to use at LeaveVT
time to free the rotation data; it will be reallocated on EnterVT.
In DeleteInputDeviceRequest, leave the conf_idev (which is shared with
xf86ConfigLayout.input) alone for devices that were specified in the
ServerLayout section of the config file. This way, in the next server
generation we are left with what was the original config and can thus re-init
the devices.
This is an addon to 6d22a9615a, an attempt to
fix Bug 14418.
X.Org Bug 15645 <https://bugs.freedesktop.org/show_bug.cgi?id=15645>
X.Org Bug 14418 <https://bugs.freedesktop.org/show_bug.cgi?id=15645>
The previous check works in the master-branch, but doesn't work with MPX. We
actually copy the SD's information into the MDs public.devicePrivate, so we
need to explicitly check whether a device is a MD before freeing the module.
Use __libmansuffix__ instead of __oslibmansuffix__ which isn't getting
replaced, and rewrap some text to get __xservername__ replaced in the
description of Option "Accel" (cpp doesn't like the preceding quote).
This extension provided bug-compatibility with pre-X11R6, but has been
stubbed out in our server since 2006 to return BadRequest when you actually
asked for it.
The DDX (xfree86 anyway) maintains its own device list in addition to the one
in the DIX. CloseDevice will only remove it from the DIX, not the DDX. If the
server then restarts (last client disconnects), the DDX devices are still
there, will be re-initialised, then the hal devices come in and are added too.
This repeats until we run out of device ids.
This also requires us to strdup() the default pointer/keyboard in
checkCoreInputDevices.
X.Org Bug 14418 <http://bugs.freedesktop.org/show_bug.cgi?id=14418>
Some pointer devices send key events [1], blindly getting the paired device
crashes the server. So let's check if the device is a pointer before we try to
get the paired device.
[1] The MS Wireless Optical Desktop 2000's multimedia keys are sent through
the pointer device, not through the keyboard device.
The jstk code for Joysticks is not used by any module, was never actually compiled and uses an API
that is deprecated these days.
No reason to keep it.
Get rid of glcontextmodes.[ch] from build, rename __GlcontextModes to
__GLXcontext. Drop all #includes of glcontextmodes.h and glcore.h.
Drop the DRI context modes extension.
Add protocol code to DRI2 module and load DRI2 extension by default.
Since there's no way to safely know how many blocks xf86DoEDID_DDC2 would
return, add a new xf86DoEEDID entrypoint to do that, and implement the
one in terms of the other.
The latter doesn't give you the option's value, it just tells you if
it's present in the configuration. So using Option "EXANoComposite" "false"
disabled composite acceleration.
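A hedged sketch of the distinction, assuming the xf86 option helpers are the
functions in question (the token and option table here are illustrative, not
EXA's real ones):

    #include "xf86Opt.h"   /* xf86IsOptionSet(), xf86ReturnOptValBool() */

    enum { OPTION_NOCOMPOSITE };   /* illustrative token */

    static Bool composite_disabled(const OptionInfoRec *options)
    {
        /* Wrong: reports mere presence, so
         *   Option "EXANoComposite" "false"
         * still counts as "set" and disables the acceleration. */
        /* return xf86IsOptionSet(options, OPTION_NOCOMPOSITE); */

        /* Right: parse the boolean value, defaulting to FALSE when absent. */
        return xf86ReturnOptValBool(options, OPTION_NOCOMPOSITE, FALSE);
    }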
This patch (and not setting HARDWARE_CURSOR_BIT_ORDER_MSBFIRST on big endian
platforms) fixes it for me with the radeon driver and doesn't break intel.
Correct patch this time :)
Should have done this in the first place. Since we're checking for the absence
of the get_crtc callback in the first place, we'll short-circuit the later call
and disable the output, so the ugly "continue" block is unnecessary.
By adding a new output callback, ->get_crtc, xf86SetDesiredModes is able to
avoid turning off outputs & CRTCs if the current output<->CRTC mappings are the
same as the desired configuration. This helps avoid flickering displays at
startup time, which speeds things up a little and looks better.
Unless we check for vtSema before calling into the CRTC and output callbacks,
we may end up trying to access video memory that no longer exists, leading to a
crash. So if we don't have vtSema, return FALSE to the caller, indicating that
we didn't do anything.
Fixes #14444.
Actually more like in the mainline case, where the ideal mode happens to
be the very first aspect match on the first monitor. But let's not
split hairs.
The address written to 0xcf8 contains the PCI slot address to send the
config cycle to. However, we would ignore that and always send the
cycle to the device whose BIOS we were running. This breaks some
integrated graphics platforms that have explicit knowledge about the
system's host bridge, for example.
While the ScreenRec's notion of size in millimeters would get updates,
the RANDR 1.1 notion wouldn't, so your screen would appear to be square
and probably at some ludicrous DPI.
xserver and libpciaccess both need to open /dev/xf86, which can only
be opened once. I implemented pci_system_init_dev_mem() like Ian
suggested. This requires some minor changes to the BSD-specific
os-support code. Since pci_system_init_dev_mem() is a no-op on
FreeBSD this should be no problem.
i.e., don't check for the end of the list by ->name == NULL, since that
won't work now. Fix the consumers of xf86DefaultModes to use the new
explicit size as well.
In order to report accurate values to users of the RandR property interface,
it's sometimes necessary to ask the driver to update the value (for example
when backlight brightness changes without the server's knowledge, due to hotkey
events or direct sysfs banging).
This patch wires up the core server code with a new xf86CrtcFuncs callback,
get_property, to allow for this.
The new code is available under the RANDR_13_INTERFACE define, which in turn
depends on the RANDR_12_INTERFACE code.
Old heuristic was to find the first monitor that expressed a preference,
then attempt to get all other monitors to agree. This doesn't work
particularly well when the two sets of modes don't precisely intersect,
you get overlapping-but-not-identical output geometry and things go wrong.
New heuristic is:
- Exact user preference, if given
- Exact output preference, if the same for all outputs
- Best (largest) mode of modes common to all outputs:
- with the same aspect ratio as all outputs (may be NULL)
- with 4:3 aspect ratio
- Then the old heuristic to try to get something lit
Note that it is simply not doable to have a reliable initial output guess if
you insist on trying to clone all outputs together. It's far too easy to
end up with displays that simply don't have modes in common. We need to
switch to right-of placement someday, once we're not limited to CRTC size
limits and we have working multi-GPU in RANDR.
If you don't do this, then Modes "800x600" in the Display subsection will
be dutifully ignored and the driver will start at whatever resolution it
feels like.
CVT is enough different from GTF that it should not be used on monitors
that aren't expecting it. This brings us closer to what the spec says
the correct behaviour is.
Before this it was meaningless to try to mark DisplayModeRec tables
const, since the mode name would be emitted as a pointer to an
anonymous string constant, and therefore would have to be fixed up by
ld.so and so couldn't live in .rodata. With this change the standard
mode lists can live in .rodata, and modes duplicated from them will
have their names filled in on the fly.
FindPCIVideoInfo() function isn't needed anymore.
xf86scanpci() is being called only once so we don't need permanent
(static) variables there.
restorePciState() is not used for now (until we find why multiple
cards aren't working).
Formerly the code claimed it could only handle up to 256 visuals, which
was true. Also true, but not explicitly stated, was that it could only
handle visuals with VID < 256. If you have enough screens, and subsystems
that add lots of visuals, you can easily run off the end. (Made worse
because we allocate visual IDs from the same pool as XIDs.) If your app
then chooses a visual > 256, then the Xinerama code would throw BadMatch
on CreateColormap and your app wouldn't start.
With this change, PanoramiXVisualTable is gone. Other subsystems that
were using it as a translation table between each screen's visuals now
use a PanoramiXTranslateVisual() helper.
This reverts commit 3abce3ea2b and
6cbaf15e61.
The memory returned to xf86LoadModule was allocated in doLoadModule, which
calls the respective module's PreInit. As it turns out, input and output
drivers store a pointer to the module elsewhere, so freeing it in
xf86LoadModule is a bad idea.
For further reference: hw/xfree86/common/xf86Helper.c
Input drivers: xf86InputDriverList[blah]->module = module;
Output drivers: xf86DriverList[blah]->module = module;
Unloading the module would not look pretty then.
Rather than letting the DDX allocate the events, allocate them once in the DIX
and just pass it around when needed.
DDX should call GetEventList() to obtain this list and then pass it into
Get{Pointer|Keyboard}Events.
LoadModule() returns the only reference to a fresh piece of memory (a
ModuleDescPtr). Sadly, xf86LoadModules dropped the return value on the floor
leaking memory for each module it loaded.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
All the failure paths were very diligent in freeing the "fullpath" temporary
string, but the success case was not. All the content only got strdup()d, so
it's not live memory anymore.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
xf86LogInit allocates a piece of memory, stores it in lf. LogInit() will then
effectively strdup it, but lf is never freed again.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
We need to start breaking the XKB API to enforce sanity, so drag whichever
headers we need to do so into the server tree, as the client API is set in
stone, being part of Xlib.
After trying to switch from X to VT (or just quit) the video-amd driver
attempts to issue INT 10/0 to go to mode 3 (VGA). The emulator, running
the BIOS code, would then spit out:
c000:0282: A2 ILLEGAL EXTENDED X86 OPCODE!
The opcode was 0F A2, or CPUID; it was not implemented in the emulator.
This simple patch, against 1.3.0.0, handles the CPUID instruction in one of
two ways:
1) if run on __i386__ or __x86_64__, it calls the CPUID instruction
directly.
2) if run elsewhere, it returns a canned 486dx4 set of values for
function 1.
This fix allows the video-amd driver to switch back to console mode,
with the GSW BIOS.
Thanks to Symbio Technologies for funding my work, and ThinCan for
providing hardware :)
Signed-off-by: Bart Trojanowski <bart@jukie.net>
Acked-by: Eric Anholt <eric@anholt.net>
'Loading foo' is verbosity 3, whereas 'already built-in' is verbosity 0.
This means that gdm's log would just be full of bare 'module already
built-in' messages.
xf86CrtcRotate() is called by randr 1.2 drivers via xf86CrtcSetMode() or xf86SetDesiredModes()
during ScreenInit() at which point pScrn->pScreen is not set. If a user specifies a rotation
in their config file pScrn->pScreen is dereferenced and boom.
First mode is _always_ preferred in 1.4; the bit that used to mean this
now means that the preferred mode is also the native pixel format. The
old "is GTF" bit now means "is continuous-frequency" instead.
Section 3.6.4, Table 3.14: Feature Support, Notes 4 and 5.
Nothing actually decoded yet, but at least we print what they are.
New in EDID 1.4:
- Color Management Data (0xF9), Section 3.10.3.7
- CVT 3 Byte Code Descriptor (0xF8), Section 3.10.3.8
- Established Timings III Descriptor (0xF7), section 3.10.3.9
- Manufacturer-specified data tag (0x00 - 0x0F), section 3.10.3.12
It's out of date and not included in the build. Instead, xf86DefModeSet.c is
built from vesamodes and extramodes using modeline2c.awk and *that's* what gets
built.
From bugzilla bug 13467¹:
Currently the xserver fails to build without this (now deleted) file, as the
Makefile tries to distribute it. The patch simply removes the reference to
modeline2c.pl.
1] http://bugs.freedesktop.org/show_bug.cgi?id=13467
Signed-off-by: James Cloos <cloos@jhcloos.com>
Don't run VT switches, terminations, or anything, on the core keyboard: only
run actions which affect the keyboard state. If we get an action such as VT
switch, just swallow the event.
From bugzilla bug 13467¹:
The modeline2c script is the only part of the Xorg server that requires Perl.
[This] is a simpler replacement that works with any normal AWK.
1] http://bugs.freedesktop.org/show_bug.cgi?id=13467
Bug was posted by Joerg Sonnenberger <joerg@NetBSD.org>.
We free the ValuatorClassRec quite regularly. If a SIGIO is handled while
we're swapping device classes, we can bring the server down when we try to
access lastx/lasty of the master device.
These hints allow an acceleration architecture to optimize allocation of certain
types of pixmaps, such as pixmaps that will serve as backing pixmaps for
redirected windows.
If the originating mode didn't have a name, we would end up with the name of
the original mode being set up correctly, but with the name of the copy still
being NULL.
In a multihead setup, if only the first screen can be
initialized, but the second screen is mentioned first in the
ServerLayout section, the xf86InitOrigins() function will crash
because the screen referred to in the e.g. "RightOf" part is
non-existent.
The transformation between fbdev and xfree86 mode timings needs to be
invertible, otherwise Xen and other framebuffers that don't have real
pixel clocks won't initialize.
This makes the root visual a GLX capable visual again and adds a GLX visual
for the COMPOSITE ARGB visual cleanly (as opposed to the hack we had before).
This changes the module initialization order so that the GLX module initializes
after COMPOSITE. The reason for this change is to be able to initialize a
GLX visual config for the COMPOSITE ARGB visual.
Call ProcessOtherEvents first, then for all keyboard devices let them be
wrapped by XKB. This way all XI events will go through XKB.
Note that the VCK is still not wrapped, so core events will bypass XKB.
(cherry picked from commit d627061b48)
Don't build XF86Misc or XF86Vidmode in hw/xfree86/dixmod when it's been
explicitly disabled in configure, or we don't have the proto modules
installed.
Instead of removing the preference bit marking the hardware declared mode
preference, leave it in place and just move the user preferred mode to the
front of the list while marking it with the USERPREF bit which will cause it
to be selected by the initial mode selection code.
Hide the getline call by checking for glibc. If glibc is not available, use
fgetln instead. Even though this section is now #ifdef'ed for Linux only, this
should help make it more portable if non-Linux folks end up wanting it.
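A hedged sketch of the split (the helper is illustrative; glibc provides
getline(3), the BSD libcs provide fgetln(3)):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Return a malloc'd, NUL-terminated copy of the next line, or NULL. */
    static char *read_line(FILE *f)
    {
    #ifdef __GLIBC__
        char *line = NULL;
        size_t n = 0;

        if (getline(&line, &n, f) == -1) {
            free(line);
            return NULL;
        }
        return line;
    #else
        size_t len;
        char *buf = fgetln(f, &len);   /* not NUL-terminated, owned by stdio */
        char *line;

        if (!buf || !(line = malloc(len + 1)))
            return NULL;
        memcpy(line, buf, len);
        line[len] = '\0';
        return line;
    #endif
    }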
It contains static paths, fails to build on non-glibc, and apparently just
exists to support distributions managing binary drivers and open-source drivers
together. Also restores previous code for fallback to vesa if nothing is
detected.
Right now we default to "all" which gives us a situation much like before,
but when the "typical" option is implemented, we can change the default and
reduce the number of visuals the GLX module bloats the X server with.
Instead of the fragile setup where we filter the modes common between the
DDX generated GLX visuals and the DRI driver generated fbconfigs, we now
just take the fbconfigs returned by the DRI driver to be our supported set.
A lot of EDID writers apparently end up stuffing centimeters (like the
maximum image size field) into the detailed timings, instead of millimeters.
Some of them only get it wrong in one direction. Also, add a quirk to let
us mark the largest 75hz mode as preferred, which will often be used for
EDID 1.0 CRTs.
If none is present, a default one will be created. This will be attached
to either the first device section in the xorg.conf (allowing you to
specify something like using EXA without having a screen section) or a
default screen section if none is present in the file.
This will allow the screen to not explicitly have a device section. If
this is the case and there is a device section in the xorg.conf, the first
one will be used. If there is no device section at all, a default one will
be created that loads the automatically determined module.
This is what we're currently shipping in Debian. Enables the ability for
drivers to ship a text file listing PCI IDs they support, and have the
server read them on startup when no driver is specified. This works, but
isn't the final solution.
DGAStealXXXEvent modified to take in device argument.
The evdev driver only sends one valuator when only one axis changed. We need
to check for DGA either way (xf86PostMotionEventP), otherwise we lose purely
horizontal/vertical movements.
Note that DGA does not do XI events.
Center the frame around the first pointer found and then update all pointers
on the same screen to move to the edges (if necessary).
Note: xf86WarpCursor needs to be modified, is using deprecated
miPointerWarpCursor and will kill the server when called with
inputInfo.pointer.
Removes "LookupKeyboardDevice" and "LookupPointerDevice" in favor of
inputInfo.keyboard and inputInfo.pointer, respectively; all use cases
are non-XI compliant anyway.
Matches linuxPci.c changes made in 8279444a54
Fixes compiler errors:
"ix86Pci.c", line 194: too many struct/union initializers
"ix86Pci.c", line 204: too many struct/union initializers
"ix86Pci.c", line 214: too many struct/union initializers
When processing events from the EQ, _always_ call the processInputProc of the
matching device. For XI devices, this proc is wrapped in three layers.
Core event handling is wrapped by XI event handling, which is wrapped by XKB.
A core event now passes through XKB -> XI -> DIX.
This gets rid of a sync'd grab problem: with the previous code, core events
did disappear during a sync'd device grab on account of mieqProcessInputEvents
calling the processInputProc of the VCP/VCK instead of the actual device. This
lead to the event being processed as normal instead of being enqueued for
later replaying.
Even though they're defined to zero by the spec, we've seen an EDID block
where the (empty) ASCII strings were stuffed in a byte early, leading to the
descriptor being considered a detailed timing instead.
This was an attempt to avoid scratch gc creation and validation for paintwin
because that was expensive. This is not the case in current servers, and the
danger of failure to implement it correctly (as seen in all previous
implementations) is high enough to justify removing it. No performance
difference detected with x11perf -create -move -resize -circulate on Xvfb.
Leave the screen hooks for PaintWindow* in for now to avoid ABI change.
Call ProcessOtherEvents first, then for all keyboard devices let them be
wrapped by XKB. This way all XI events will go through XKB.
Note that the VCK is still not wrapped, so core events will bypass XKB.
Add keyc->postdown, which represents the key state as of the last mieqEnqueue
call, and use it when we need to know the posted state, instead of the
processed state (keyc->down). Add small functions to getevents.c to query and
modify key state in postdown and use them all through, eliminating previously
broken uses.
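A hedged sketch of what such helpers look like, assuming the usual X
key-state layout of one bit per keycode in a 32-byte vector (names are
illustrative, not necessarily those added to getevents.c):

    typedef unsigned char KeyVector[32];   /* 256 keycodes, one bit each */

    static int key_is_down(const KeyVector down, int keycode)
    {
        return (down[keycode >> 3] >> (keycode & 7)) & 1;
    }

    static void set_key_down(KeyVector down, int keycode)
    {
        down[keycode >> 3] |= 1 << (keycode & 7);
    }

    static void set_key_up(KeyVector down, int keycode)
    {
        down[keycode >> 3] &= ~(1 << (keycode & 7));
    }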
In commit 41bb9fce47, the event delivery loop
for Xinput enabled keyboards was changed and accidentally used the wrong
index variable, causing random events to be delivered when returning from VT
switch.
In addition, in commit aeba855b07,
SIGIO was blocked during delivery of these events, but not for the entire
period the xf86Events array was being used. Block SIGIO for the whole loop
to avoid other event delivery from trashing the key release events.
(cherry picked from commit aa7ed1f5f35cd043bc38d985500aa0a32e857e84)
Previously, the server version reported by xdpyinfo and Xorg -version would
bear some vague resemblance to a X.Org katamari version, but in the presence
of modularization (and client-server relationships with different katamari
versions on each side) those numbers don't really make sense. Instead, just
report the package version.
When branching a stable branch, master's version should be immediately updated
to the endpoint of the stable branch plus a snapshot of 1 (for example,
1.4.0.1 after server-1.4-branch). The stable branch should then be changed to
RC0 at that time (1.3.99.0, for example).
This scheme was partially attempted for server 1.3, but lacked the appropriate
master updates, thus why it had to be revisited now. While here, we can also
remove a lot of versioning complexity since everything is based on the package
version.
One of these I introduced by listing dix and mi in the same library list to
simplify other servers. The other had been hacked around using libosandcommon,
which is now gone.
This cleans up server Makefile.ams a little bit, but also means that people
messing with configure.ac need to be careful with whether they put libraries
in the _LIBS or _SYS_LIBS targets. Hopefully the comment in configure.ac will
clarify the issues.
The author of the int10 code looked at the VBIOS POSTing code
in DOSEMU to get some initial idea on how to POST a VBIOS.
To give credit to the DOSEMU Team for this inspiration a comment
was added to the code which could suggest that code from the
GPLed DOSEMU was directly incorporated into this code.
This patch should clarify the situation.
Note that pciaccess doesn't yet have Net/OpenBSD support, but the relevant
code should go there instead of disconnected code in the X Server.
While here, remove the now-disabled INCLUDE_XF86_NO_DOMAIN from the headers,
and un-disable xf8StdAccResFromOS for those OSes without domain support which
will need it.
over to new system.
Need to update documentation and address some remaining vestiges of
old system such as CursorRec structure, fb "offman" structure, and
FontRec privates.
This was a bunch of poorly defined resource ranges per OS/platform combination
which were supposed to represent what regions could potentially have resources
allocated into them.
Composite's automatic redirection is a more general mechanism than the
ad-hoc BS machinery, so it's much prettier to implement the one in terms
of the other. Composite now wraps ChangeWindowAttributes and activates
automatic redirection for windows with backing store requested. The old
backing store infrastructure is completely gutted: ABI-visible structures
retain the function pointers, but they never get called, and all the
open-coded conditionals throughout the DIX layer to implement BS are gone.
Note that this is still not a strictly complete implementation of backing
store, since Composite will throw the bits away on unmap and therefore
WhenMapped and Always hints are equivalent.
xf86RandR12ScreenSetSize must protect calls to EnableDisableFBAccess with
suitable vtSema checks to avoid invoking driver code while the X server is
inactive.
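A hedged sketch of the guard (the wrapper name is made up; the
xf86EnableDisableFBAccess call shown uses the old screen-index signature):

    #include "xf86.h"   /* ScrnInfoPtr, xf86EnableDisableFBAccess() */

    /* Only call into the driver while the server owns the VT. */
    static void toggle_fb_access(ScrnInfoPtr pScrn, Bool enable)
    {
        if (pScrn->vtSema)
            xf86EnableDisableFBAccess(pScrn->scrnIndex, enable);
    }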
The multi-crtc cursor code in hw/xfree86/modes holds a reference to the
current cursor. This reference must be correctly ref counted so the cursor
is not freed out from underneath this code.
This is where they should have been in the first place. All the rest of
the code in the server defines such things in the source files, not the
headers.
A bunch of CFLAGS had gone missing, so the build failed with errors like:
../../../../../hw/xfree86/os-support/linux/lnx_ev56.c:7:19: error: input.h: No such file or directory
../../../../../hw/xfree86/os-support/linux/lnx_ev56.c:8:24: error: scrnintstr.h: No such file or directory
As a result, we can remove the quirks that existed to flip the bits back around
for us. This is not confirmed in all cases due to lack of bugs containing EDID
blocks associated with the quirks, but is likely true.
The outport is most likely unnecessary on any currently used hardware,
the byte copy is necessary from what I know on IA64 and friends so leave it.
Add a new API entry point which lets a driver select the old behaviour if
such a need is ever found.
This gives me ~20% speed up on startup on 945 hardware.
If NoAutoAddDevices is given as a server flag, then no devices will be added
from HAL events at all. If NoAutoEnableDevices is given, then the devices will
be added (and the DevicePresenceNotify sent), but not enabled, thus leaving
policy up to the client.
Make sure the font path is always 'built-ins' when we use built-in fonts,
rather than having it as a fixed path for a while, then clobbering it
halfway through startup.
Set the new randr crtc of the output before the output change notification is
delivered to the clients.
Remove RROutputSetCrtc as it is not really necessary. All we have to do is set
the output's crtc on RRCrtcNotify.
When the PreferredMode option is selected in the config file, remove the
M_T_PREFERRED bit from all other preferred modes to force the config file
mode to be selected.
Code that disabled mode detection on disabled outputs would confuse
applications by listing said outputs as connected but without any modes.
This makes the disabled state in the config file affect only the initial
configuration and not subsequent modifications by RandR.
The DDX code was ignoring pending properties for computing when mode setting
was required. This meant that configurations differing only in property
values would not cause the mode to be set.
If your loader is as bad as elfloader, then it makes sense for the
server to have some stubs for you to assign to / break on. However it
is no longer 1996.
I made a mistake in some new code using MakeAtom, passing the size of the
string instead of the length of the string. Figuring there might be other
such mistakes, I reviewed the server code and found four bugs of the same
form.
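A hedged illustration of that form of mistake ("EXAMPLE_PROPERTY" is made up;
MakeAtom is the DIX call in question):

    #include <string.h>
    #include "dix.h"   /* MakeAtom() */

    static void make_atoms(void)
    {
        /* Wrong: sizeof counts the trailing '\0', so the atom name carries
         * an extra byte and never matches later lookups by name. */
        Atom bad = MakeAtom("EXAMPLE_PROPERTY",
                            sizeof("EXAMPLE_PROPERTY"), TRUE);

        /* Right: pass the string length. */
        Atom good = MakeAtom("EXAMPLE_PROPERTY",
                             strlen("EXAMPLE_PROPERTY"), TRUE);

        (void)bad;
        (void)good;
    }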
The previous scheme didn't work when the client didn't create the core drawable,
e.g. the root or composite overlay window. Use refcounting via special client
resources to fix that.
Print debug messages only when the appropriate debug bit is set in the
8086 state vector, so you can focus in on the call you're actually
interested in.
This is to avoid issues with redirected windows which are located partly or
fully outside of a screen edge, resulting in unusual cliprects which the 3D
drivers generally can't handle. The symptoms in such cases would be incorrect
rendering or even crashes or hangs.
at server startup, and not against the virtual X/Y parameters
as they can change.
This fixes an issue when canGrow is TRUE and modes get dropped
when using the virtual X/Y parameters.
Major stylistic cleanups, greatly expanded cross-reference ("SEE ALSO")
section and some typo fixes.
This patch by Branden Robinson. Forward-ported by Fabio M. Di Nitto.
When the root window changed size, xf86XVFillKeyHelper would not revalidate
the GC, leaving the clip at the old size causing lossage (and possibly
memory corruption if the screen and frame buffer shrank).
Fixed by just using a scratch GC; saving memory, eliminating bugs and
shrinking the code.
To be used by AIGLX for GLX_EXT_texture_from_pixmap without several data copies.
The texOffsetStart hook must make sure that the given pixmap is accessible by
the GPU for texturing and return an 'offset' that can be used by the 3D
driver for that purpose.
The texOffsetFinish hook is called when the pixmap is no longer being used for
texturing.
DRI uses a non-screen block/wakeup handler which will be executed after the
screen block handler finishes. To ensure that the rotation block handler is
executed under the DRI lock, dynamically wrap the screen block handler for
rotation.
Leaving devices enabled during server startup can cause problems during the
initial mode setting in the server, especially when they are used for
different purposes by the X server than by the BIOS. Disabling all of them
before any mode setting is attempted provides a stable base upon which the
remaining mode setting operations can be built.
The code in hw/xfree86/os-support/bus/sparcPci.c:simbaCheckBus()
is trying to mimic VGA routing by disabling I/O space responses
behind the Simba PCI-PCI controller.
Unfortunately, doing this also happens to disable access to the
IDE controller I/O space registers, thus crashing the system. The
granularity of the I/O disabling in the Simba controller is not
fine enough to disable VGA without also disabling the IDE controller
registers.
When the modules section is parsed, if a module is set to be loaded by
default, this will be logged. If it is redundantly specified in xorg.conf,
this will also be noted. None of this logging will happen if the xorg.conf
lacks a modules section.
This provides a new option, UseDefaultFontPath. This option is enabled by
default, and causes the X server to always append the default font path
(defined at compile time) to the font path for the server. This will allow
people to specify additional font paths if they want without breaking
their font path, thus hopefully avoiding ye olde "fixed font" problem.
Because this option is a ServerFlag option, the ServerFlags need to be
processed before the files section of the config file, so swap the order
that they are processed.
Provide default modules that may be overridden easily. Previously the
server would load a set of default modules, but only if none were
specified in the xorg.conf, or if you didn't have a xorg.conf at all. This
patch provides a default set and you can add only the "Load" instructions
to xorg.conf that you want without losing the defaults. Similarly, if you
don't want to load a module that's loaded by default, you can add "Disable
modulename" to your xorg.conf (see man xorg.conf in this release for
details). This allows for a minimal "Modules" section, where the user only
need specify what they want to be different. See bug #10541 for more.
The list of default modules is taken from the set loaded by default when
there was a xorg.conf containing no "Modules" section.
A potential problem for some users is that some users disable a module,
most notably DRI, by commenting out the "Load" line in their xorg.conf.
This needs to be changed to an uncommented "Disable" line, as DRI is
loaded by default.
Add RawDeviceEvent (pointers only for now).
This commit changes the event queue to use EventLists instead of xEvent
arrays. Only EQ is affected, event delivery still uses xEvent* (look for
comment in mieqProcessInputEvent).
RawDeviceEvents deliver driver information to the client, without clipping or
acceleration.
Formerly we sized an array with a compile time constant, then initialized
its size to the same constant, but the Linux PCI init code would increase
that "constant". So if you happened to have more than 128 PCI devices,
you'd happily scribble into whatever variables happened to be in .bss
after that array.
Only really fixed for Linux atm. Other OSes will simply (still) fail to
work on video devices above the 128th PCI device.
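A hedged sketch of the failure pattern (structure and names are illustrative,
not the actual xf86 PCI code):

    #define MAX_PCI_DEVICES 128     /* the compile-time "constant" */

    struct pci_info { int bus, dev, func; };

    static struct pci_info devices[MAX_PCI_DEVICES];

    /* The OS-specific scan used to bump 'count' past MAX_PCI_DEVICES, so
     * the unbounded copy scribbled over whatever .bss variables followed
     * the array.  Clamping to the real array size (or allocating the
     * array dynamically) avoids that. */
    static void record_devices(const struct pci_info *found, int count)
    {
        int i;

        if (count > MAX_PCI_DEVICES)
            count = MAX_PCI_DEVICES;
        for (i = 0; i < count; i++)
            devices[i] = found[i];
    }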
SourceValidate is used exclusively by the software cursor code to pull the
cursor off of the screen before using the screen as a source operand. This
eliminates the software cursor from the frame buffer while painting the
rotated image though. Disabling this function by temporarily setting the
screen function pointer to NULL causes the cursor image to be captured.
(cherry picked from commit 05e1c45ade9c558820685bfd2541617a2e8de816)