Merge tag 'pm+acpi-3.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael...
authorLinus Torvalds <torvalds@linux-foundation.org>
Tue, 10 Feb 2015 23:09:41 +0000 (15:09 -0800)
committerLinus Torvalds <torvalds@linux-foundation.org>
Tue, 10 Feb 2015 23:09:41 +0000 (15:09 -0800)
Pull ACPI and power management updates from Rafael Wysocki:
 "We have a few new features this time, including a new SFI-based
  cpufreq driver, a new devfreq driver for Tegra Activity Monitor, a new
  devfreq class for providing its governors with raw utilization data
  and a new ACPI driver for AMD SoCs.

  Still, the majority of changes here are reworks of existing code to
  make it more straightforward or to prepare it for implementing new
  features on top of it.  The primary example is the rework of ACPI
  resources handling from Jiang Liu, Thomas Gleixner and Lv Zheng with
  support for IOAPIC hotplug implemented on top of it, but there is
  quite a number of changes of this kind in the cpufreq core, ACPICA,
  ACPI EC driver, ACPI processor driver and the generic power domains
  core code too.

  The most active developer is Viresh Kumar with his cpufreq changes.

  Specifics:

   - Rework of the core ACPI resources parsing code to fix issues in it
     and make using resource offsets more convenient, and consolidation
     of some resource-handling code in a couple of places that had grown
     analogous data structures and code to cover the same gap in the
     core (Jiang Liu, Thomas Gleixner, Lv Zheng).

   - ACPI-based IOAPIC hotplug support on top of the resources handling
     rework (Jiang Liu, Yinghai Lu).

   - ACPICA update to upstream release 20150204, including an interrupt
     handling rework that allows drivers to install raw handlers for
     ACPI GPEs, which then become entirely responsible for the given
     GPE so that the ACPICA core code won't touch it (Lv Zheng, David E
     Box, Octavian Purdila).
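
     A raw GPE handler along the lines described above might look
     roughly like this (a hedged sketch; the handler and device names
     are illustrative, and the exact responsibilities of a raw handler
     are defined by the ACPICA 20150204 interface, not by this example):

     ```c
     /* Sketch only: with a raw handler, ACPICA no longer clears or
      * re-enables the GPE, so the driver must manage it itself via
      * the acpi_*_gpe() helpers. */
     static u32 my_gpe_handler(acpi_handle gpe_device, u32 gpe_number,
                               void *context)
     {
             struct my_device *mydev = context;   /* illustrative */

             acpi_clear_gpe(gpe_device, gpe_number);
             /* ... service the event, possibly polling status ... */
             return ACPI_INTERRUPT_HANDLED;
     }

     status = acpi_install_gpe_raw_handler(NULL, gpe_number,
                                           ACPI_GPE_LEVEL_TRIGGERED,
                                           my_gpe_handler, mydev);
     ```

     The ACPI EC driver rework below is the first in-tree user of this
     mode.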

   - ACPI EC driver rework to fix several concurrency issues and other
     problems related to events handling on top of the ACPICA's new
     support for raw GPE handlers (Lv Zheng).

   - New ACPI driver for AMD SoCs analogous to the LPSS (Low-Power
     Subsystem) driver for Intel chips (Ken Xue).

   - Two minor fixes of the ACPI LPSS driver (Heikki Krogerus, Jarkko
     Nikula).

   - Two new blacklist entries for machines (Samsung 730U3E/740U3E and
     510R) where the native backlight interface doesn't work correctly
     while the ACPI one does (Hans de Goede).

   - Rework of the ACPI processor driver's handling of idle states to
     make the code more straightforward and less bloated overall (Rafael
     J Wysocki).

   - Assorted minor fixes related to ACPI and SFI (Andreas Ruprecht,
     Andy Shevchenko, Hanjun Guo, Jan Beulich, Rafael J Wysocki, Yaowei
     Bai).

   - PCI core power management modification to avoid resuming (some)
     runtime-suspended devices during system suspend if they are in the
     right states already (Rafael J Wysocki).

   - New SFI-based cpufreq driver for Intel platforms using SFI
     (Srinidhi Kasagar).

   - cpufreq core fixes, cleanups and simplifications (Viresh Kumar,
     Doug Anderson, Wolfram Sang).

   - SkyLake CPU support and other updates for the intel_pstate driver
     (Kristen Carlson Accardi, Srinivas Pandruvada).

   - cpufreq-dt driver cleanup (Markus Elfring).

   - Init fix for the ARM big.LITTLE cpuidle driver (Sudeep Holla).

   - Generic power domains core code fixes and cleanups (Ulf Hansson).

   - Operating Performance Points (OPP) core code cleanups and kernel
     documentation update (Nishanth Menon).

   - New debugfs interface to make the list of PM QoS constraints
     available to user space (Nishanth Menon).

   - New devfreq driver for Tegra Activity Monitor (Tomeu Vizoso).

   - New devfreq class (devfreq_event) to provide raw utilization data
     to devfreq governors (Chanwoo Choi).
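
     A devfreq driver or governor consuming the new class might use it
     roughly as follows (a hedged sketch of the devfreq-event API added
     here; error handling is abbreviated and the lookup index is
     illustrative):

     ```c
     #include <linux/devfreq-event.h>

     struct devfreq_event_dev *edev;
     struct devfreq_event_data edata;

     /* Look up the event device via a phandle in the device tree. */
     edev = devfreq_event_get_edev_by_phandle(dev, 0);
     if (IS_ERR(edev))
             return PTR_ERR(edev);

     devfreq_event_enable_edev(edev);

     /* Later, e.g. in the governor's polling path: read the raw
      * utilization counters for this monitoring period. */
     devfreq_event_get_event(edev, &edata);
     /* edata.load_count vs. edata.total_count gives the busy ratio. */
     ```

     The exynos-ppmu driver added in this pull is the first provider of
     such event data.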

   - Assorted minor fixes and cleanups related to power management
     (Andreas Ruprecht, Krzysztof Kozlowski, Rickard Strandqvist, Pavel
     Machek, Todd E Brandt, Wonhong Kwon).

   - turbostat updates (Len Brown) and cpupower Makefile improvement
     (Sriram Raghunathan)"

* tag 'pm+acpi-3.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (151 commits)
  tools/power turbostat: relax dependency on APERF_MSR
  tools/power turbostat: relax dependency on invariant TSC
  Merge branch 'pci/host-generic' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci into acpi-resources
  tools/power turbostat: decode MSR_*_PERF_LIMIT_REASONS
  tools/power turbostat: relax dependency on root permission
  ACPI / video: Add disable_native_backlight quirk for Samsung 510R
  ACPI / PM: Remove unneeded nested #ifdef
  USB / PM: Remove unneeded #ifdef and associated dead code
  intel_pstate: provide option to only use intel_pstate with HWP
  ACPI / EC: Add GPE reference counting debugging messages
  ACPI / EC: Add query flushing support
  ACPI / EC: Refine command storm prevention support
  ACPI / EC: Add command flushing support.
  ACPI / EC: Introduce STARTED/STOPPED flags to replace BLOCKED flag
  ACPI: add AMD ACPI2Platform device support for x86 system
  ACPI / table: remove duplicate NULL check for the handler of acpi_table_parse()
  ACPI / EC: Update revision due to raw handler mode.
  ACPI / EC: Reduce ec_poll() by referencing the last register access timestamp.
  ACPI / EC: Fix several GPE handling issues by deploying ACPI_GPE_DISPATCH_RAW_HANDLER mode.
  ACPICA: Events: Enable APIs to allow interrupt/polling adaptive request based GPE handling model
  ...

299 files changed:
Documentation/acpi/enumeration.txt
Documentation/cpu-freq/intel-pstate.txt
Documentation/devicetree/bindings/devfreq/event/exynos-ppmu.txt [new file with mode: 0644]
Documentation/kernel-parameters.txt
Documentation/power/s2ram.txt
MAINTAINERS
arch/arm/kernel/bios32.c
arch/ia64/kernel/acpi-ext.c
arch/ia64/kernel/acpi.c
arch/ia64/pci/pci.c
arch/x86/Kconfig
arch/x86/include/asm/pci_x86.h
arch/x86/include/uapi/asm/msr-index.h
arch/x86/kernel/acpi/boot.c
arch/x86/pci/acpi.c
arch/x86/pci/bus_numa.c
arch/x86/pci/common.c
arch/x86/pci/intel_mid_pci.c
arch/x86/pci/irq.c
arch/x86/pci/mmconfig-shared.c
drivers/acpi/Kconfig
drivers/acpi/Makefile
drivers/acpi/acpi_apd.c [new file with mode: 0644]
drivers/acpi/acpi_lpss.c
drivers/acpi/acpi_memhotplug.c
drivers/acpi/acpi_platform.c
drivers/acpi/acpica/acapps.h
drivers/acpi/acpica/accommon.h
drivers/acpi/acpica/acdebug.h
drivers/acpi/acpica/acdispat.h
drivers/acpi/acpica/acevents.h
drivers/acpi/acpica/acglobal.h
drivers/acpi/acpica/achware.h
drivers/acpi/acpica/acinterp.h
drivers/acpi/acpica/aclocal.h
drivers/acpi/acpica/acmacros.h
drivers/acpi/acpica/acnamesp.h
drivers/acpi/acpica/acobject.h
drivers/acpi/acpica/acopcode.h
drivers/acpi/acpica/acparser.h
drivers/acpi/acpica/acpredef.h
drivers/acpi/acpica/acresrc.h
drivers/acpi/acpica/acstruct.h
drivers/acpi/acpica/actables.h
drivers/acpi/acpica/acutils.h
drivers/acpi/acpica/amlcode.h
drivers/acpi/acpica/amlresrc.h
drivers/acpi/acpica/dsargs.c
drivers/acpi/acpica/dscontrol.c
drivers/acpi/acpica/dsfield.c
drivers/acpi/acpica/dsinit.c
drivers/acpi/acpica/dsmethod.c
drivers/acpi/acpica/dsmthdat.c
drivers/acpi/acpica/dsobject.c
drivers/acpi/acpica/dsopcode.c
drivers/acpi/acpica/dsutils.c
drivers/acpi/acpica/dswexec.c
drivers/acpi/acpica/dswload.c
drivers/acpi/acpica/dswload2.c
drivers/acpi/acpica/dswscope.c
drivers/acpi/acpica/dswstate.c
drivers/acpi/acpica/evevent.c
drivers/acpi/acpica/evglock.c
drivers/acpi/acpica/evgpe.c
drivers/acpi/acpica/evgpeblk.c
drivers/acpi/acpica/evgpeinit.c
drivers/acpi/acpica/evgpeutil.c
drivers/acpi/acpica/evhandler.c
drivers/acpi/acpica/evmisc.c
drivers/acpi/acpica/evregion.c
drivers/acpi/acpica/evrgnini.c
drivers/acpi/acpica/evsci.c
drivers/acpi/acpica/evxface.c
drivers/acpi/acpica/evxfevnt.c
drivers/acpi/acpica/evxfgpe.c
drivers/acpi/acpica/evxfregn.c
drivers/acpi/acpica/exconfig.c
drivers/acpi/acpica/exconvrt.c
drivers/acpi/acpica/excreate.c
drivers/acpi/acpica/exdebug.c
drivers/acpi/acpica/exdump.c
drivers/acpi/acpica/exfield.c
drivers/acpi/acpica/exfldio.c
drivers/acpi/acpica/exmisc.c
drivers/acpi/acpica/exmutex.c
drivers/acpi/acpica/exnames.c
drivers/acpi/acpica/exoparg1.c
drivers/acpi/acpica/exoparg2.c
drivers/acpi/acpica/exoparg3.c
drivers/acpi/acpica/exoparg6.c
drivers/acpi/acpica/exprep.c
drivers/acpi/acpica/exregion.c
drivers/acpi/acpica/exresnte.c
drivers/acpi/acpica/exresolv.c
drivers/acpi/acpica/exresop.c
drivers/acpi/acpica/exstore.c
drivers/acpi/acpica/exstoren.c
drivers/acpi/acpica/exstorob.c
drivers/acpi/acpica/exsystem.c
drivers/acpi/acpica/exutils.c
drivers/acpi/acpica/hwacpi.c
drivers/acpi/acpica/hwesleep.c
drivers/acpi/acpica/hwgpe.c
drivers/acpi/acpica/hwpci.c
drivers/acpi/acpica/hwregs.c
drivers/acpi/acpica/hwsleep.c
drivers/acpi/acpica/hwtimer.c
drivers/acpi/acpica/hwvalid.c
drivers/acpi/acpica/hwxface.c
drivers/acpi/acpica/hwxfsleep.c
drivers/acpi/acpica/nsaccess.c
drivers/acpi/acpica/nsalloc.c
drivers/acpi/acpica/nsarguments.c
drivers/acpi/acpica/nsconvert.c
drivers/acpi/acpica/nsdump.c
drivers/acpi/acpica/nsdumpdv.c
drivers/acpi/acpica/nseval.c
drivers/acpi/acpica/nsinit.c
drivers/acpi/acpica/nsload.c
drivers/acpi/acpica/nsnames.c
drivers/acpi/acpica/nsobject.c
drivers/acpi/acpica/nsparse.c
drivers/acpi/acpica/nspredef.c
drivers/acpi/acpica/nsprepkg.c
drivers/acpi/acpica/nsrepair.c
drivers/acpi/acpica/nsrepair2.c
drivers/acpi/acpica/nssearch.c
drivers/acpi/acpica/nsutils.c
drivers/acpi/acpica/nswalk.c
drivers/acpi/acpica/nsxfeval.c
drivers/acpi/acpica/nsxfname.c
drivers/acpi/acpica/nsxfobj.c
drivers/acpi/acpica/psargs.c
drivers/acpi/acpica/psloop.c
drivers/acpi/acpica/psobject.c
drivers/acpi/acpica/psopcode.c
drivers/acpi/acpica/psopinfo.c
drivers/acpi/acpica/psparse.c
drivers/acpi/acpica/psscope.c
drivers/acpi/acpica/pstree.c
drivers/acpi/acpica/psutils.c
drivers/acpi/acpica/pswalk.c
drivers/acpi/acpica/psxface.c
drivers/acpi/acpica/rsaddr.c
drivers/acpi/acpica/rscalc.c
drivers/acpi/acpica/rscreate.c
drivers/acpi/acpica/rsdump.c
drivers/acpi/acpica/rsdumpinfo.c
drivers/acpi/acpica/rsinfo.c
drivers/acpi/acpica/rsio.c
drivers/acpi/acpica/rsirq.c
drivers/acpi/acpica/rslist.c
drivers/acpi/acpica/rsmemory.c
drivers/acpi/acpica/rsmisc.c
drivers/acpi/acpica/rsserial.c
drivers/acpi/acpica/rsutils.c
drivers/acpi/acpica/rsxface.c
drivers/acpi/acpica/tbdata.c
drivers/acpi/acpica/tbfadt.c
drivers/acpi/acpica/tbfind.c
drivers/acpi/acpica/tbinstal.c
drivers/acpi/acpica/tbprint.c
drivers/acpi/acpica/tbutils.c
drivers/acpi/acpica/tbxface.c
drivers/acpi/acpica/tbxfload.c
drivers/acpi/acpica/tbxfroot.c
drivers/acpi/acpica/utaddress.c
drivers/acpi/acpica/utalloc.c
drivers/acpi/acpica/utbuffer.c
drivers/acpi/acpica/utcache.c
drivers/acpi/acpica/utcopy.c
drivers/acpi/acpica/utdebug.c
drivers/acpi/acpica/utdecode.c
drivers/acpi/acpica/utdelete.c
drivers/acpi/acpica/uterror.c
drivers/acpi/acpica/uteval.c
drivers/acpi/acpica/utexcep.c
drivers/acpi/acpica/utfileio.c
drivers/acpi/acpica/utglobal.c
drivers/acpi/acpica/uthex.c
drivers/acpi/acpica/utids.c
drivers/acpi/acpica/utinit.c
drivers/acpi/acpica/utlock.c
drivers/acpi/acpica/utmath.c
drivers/acpi/acpica/utmisc.c
drivers/acpi/acpica/utmutex.c
drivers/acpi/acpica/utobject.c
drivers/acpi/acpica/utosi.c
drivers/acpi/acpica/utownerid.c
drivers/acpi/acpica/utpredef.c
drivers/acpi/acpica/utprint.c
drivers/acpi/acpica/utresrc.c
drivers/acpi/acpica/utstate.c
drivers/acpi/acpica/utstring.c
drivers/acpi/acpica/uttrack.c
drivers/acpi/acpica/utuuid.c
drivers/acpi/acpica/utxface.c
drivers/acpi/acpica/utxferror.c
drivers/acpi/acpica/utxfinit.c
drivers/acpi/acpica/utxfmutex.c
drivers/acpi/device_pm.c
drivers/acpi/ec.c
drivers/acpi/internal.h
drivers/acpi/ioapic.c [new file with mode: 0644]
drivers/acpi/numa.c
drivers/acpi/pci_irq.c
drivers/acpi/pci_root.c
drivers/acpi/processor_core.c
drivers/acpi/processor_idle.c
drivers/acpi/resource.c
drivers/acpi/scan.c
drivers/acpi/sleep.c
drivers/acpi/video.c
drivers/base/power/common.c
drivers/base/power/domain.c
drivers/base/power/opp.c
drivers/base/power/qos.c
drivers/char/hpet.c
drivers/cpufreq/Kconfig.x86
drivers/cpufreq/Makefile
drivers/cpufreq/cpufreq-dt.c
drivers/cpufreq/cpufreq.c
drivers/cpufreq/cpufreq_stats.c
drivers/cpufreq/intel_pstate.c
drivers/cpufreq/ls1x-cpufreq.c
drivers/cpufreq/sfi-cpufreq.c [new file with mode: 0644]
drivers/cpuidle/cpuidle-big_little.c
drivers/devfreq/Kconfig
drivers/devfreq/Makefile
drivers/devfreq/devfreq-event.c [new file with mode: 0644]
drivers/devfreq/event/Kconfig [new file with mode: 0644]
drivers/devfreq/event/Makefile [new file with mode: 0644]
drivers/devfreq/event/exynos-ppmu.c [new file with mode: 0644]
drivers/devfreq/event/exynos-ppmu.h [new file with mode: 0644]
drivers/devfreq/tegra-devfreq.c [new file with mode: 0644]
drivers/dma/acpi-dma.c
drivers/hv/vmbus_drv.c
drivers/mailbox/pcc.c
drivers/of/of_pci.c
drivers/pci/bus.c
drivers/pci/host-bridge.c
drivers/pci/host/pci-host-generic.c
drivers/pci/host/pci-versatile.c
drivers/pci/host/pci-xgene.c
drivers/pci/host/pcie-xilinx.c
drivers/pci/hotplug/sgi_hotplug.c
drivers/pci/pci-acpi.c
drivers/pci/pci-driver.c
drivers/pci/pci.c
drivers/pci/pci.h
drivers/pci/probe.c
drivers/pnp/pnpacpi/rsparser.c
drivers/sfi/sfi_core.c
drivers/usb/core/hub.c
drivers/xen/xen-acpi-memhotplug.c
include/acpi/acbuffer.h
include/acpi/acconfig.h
include/acpi/acexcep.h
include/acpi/acnames.h
include/acpi/acoutput.h
include/acpi/acpi.h
include/acpi/acpiosxf.h
include/acpi/acpixf.h
include/acpi/acrestyp.h
include/acpi/actbl.h
include/acpi/actbl1.h
include/acpi/actbl2.h
include/acpi/actbl3.h
include/acpi/actypes.h
include/acpi/platform/acenv.h
include/acpi/platform/acenvex.h
include/acpi/platform/acgcc.h
include/acpi/platform/aclinux.h
include/acpi/platform/aclinuxex.h
include/linux/acpi.h
include/linux/cpufreq.h
include/linux/devfreq-event.h [new file with mode: 0644]
include/linux/pci.h
include/linux/pm.h
include/linux/pm_domain.h
include/linux/resource_ext.h [new file with mode: 0644]
kernel/power/qos.c
kernel/power/snapshot.c
kernel/resource.c
kernel/trace/power-traces.c
tools/power/acpi/common/cmfsize.c
tools/power/acpi/common/getopt.c
tools/power/acpi/os_specific/service_layers/oslibcfs.c
tools/power/acpi/os_specific/service_layers/oslinuxtbl.c
tools/power/acpi/os_specific/service_layers/osunixdir.c
tools/power/acpi/os_specific/service_layers/osunixmap.c
tools/power/acpi/os_specific/service_layers/osunixxf.c
tools/power/acpi/tools/acpidump/acpidump.h
tools/power/acpi/tools/acpidump/apdump.c
tools/power/acpi/tools/acpidump/apfiles.c
tools/power/acpi/tools/acpidump/apmain.c
tools/power/cpupower/Makefile
tools/power/x86/turbostat/turbostat.8
tools/power/x86/turbostat/turbostat.c

index b60d2ab69497bc66b47e752d1f7588b3cdbb93ab..9b121a569ab4397841ac9611aad90502fefb16b7 100644 (file)
@@ -243,7 +243,7 @@ input driver:
                        .owner  = THIS_MODULE,
                        .pm     = &mpu3050_pm,
                        .of_match_table = mpu3050_of_match,
-                       .acpi_match_table  ACPI_PTR(mpu3050_acpi_match),
+                       .acpi_match_table = ACPI_PTR(mpu3050_acpi_match),
                },
                .probe          = mpu3050_probe,
                .remove         = mpu3050_remove,
index 765d7fc0e692182a6c9952f4a023eed553883612..655750743fb09b3c78a8b5c47cc96ace7b311722 100644 (file)
@@ -37,6 +37,14 @@ controlling P state selection. These files have been added to
       no_turbo: limits the driver to selecting P states below the turbo
       frequency range.
 
+      turbo_pct: displays the percentage of the total performance that
+      is supported by hardware that is in the turbo range.  This number
+      is independent of whether turbo has been disabled or not.
+
+      num_pstates: displays the number of pstates that are supported
+      by hardware.  This number is independent of whether turbo has
+      been disabled or not.
+
 For contemporary Intel processors, the frequency is controlled by the
 processor itself and the P-states exposed to software are related to
 performance levels.  The idea that frequency can be set to a single
diff --git a/Documentation/devicetree/bindings/devfreq/event/exynos-ppmu.txt b/Documentation/devicetree/bindings/devfreq/event/exynos-ppmu.txt
new file mode 100644 (file)
index 0000000..b54bf3a
--- /dev/null
@@ -0,0 +1,110 @@
+
+* Samsung Exynos PPMU (Platform Performance Monitoring Unit) device
+
+The Samsung Exynos SoC has a PPMU (Platform Performance Monitoring Unit) for
+each IP. The PPMU provides the primitive values needed to derive performance
+data. These PPMU events provide information about the SoC's behavior which
+can be used to analyze system performance, to make behavior visible and to
+count the usage of each IP (DMC, CPU, RIGHTBUS, LEFTBUS, CAM interface, LCD,
+G3D, MFC). The Exynos PPMU driver uses the devfreq-event class to provide
+event data to various devfreq devices. The devfreq devices use the event
+data when determining the current state of each IP.
+
+Required properties:
+- compatible: Should be "samsung,exynos-ppmu".
+- reg: physical base address of each PPMU and length of memory mapped region.
+
+Optional properties:
+- clock-names : the name of clock used by the PPMU, "ppmu"
+- clocks : phandles for clock specified in "clock-names" property
+- #clock-cells: should be 1.
+
+Example1 : PPMU nodes in exynos3250.dtsi are listed below.
+
+               ppmu_dmc0: ppmu_dmc0@106a0000 {
+                       compatible = "samsung,exynos-ppmu";
+                       reg = <0x106a0000 0x2000>;
+                       status = "disabled";
+               };
+
+               ppmu_dmc1: ppmu_dmc1@106b0000 {
+                       compatible = "samsung,exynos-ppmu";
+                       reg = <0x106b0000 0x2000>;
+                       status = "disabled";
+               };
+
+               ppmu_cpu: ppmu_cpu@106c0000 {
+                       compatible = "samsung,exynos-ppmu";
+                       reg = <0x106c0000 0x2000>;
+                       status = "disabled";
+               };
+
+               ppmu_rightbus: ppmu_rightbus@112a0000 {
+                       compatible = "samsung,exynos-ppmu";
+                       reg = <0x112a0000 0x2000>;
+                       clocks = <&cmu CLK_PPMURIGHT>;
+                       clock-names = "ppmu";
+                       status = "disabled";
+               };
+
+               ppmu_leftbus: ppmu_leftbus0@116a0000 {
+                       compatible = "samsung,exynos-ppmu";
+                       reg = <0x116a0000 0x2000>;
+                       clocks = <&cmu CLK_PPMULEFT>;
+                       clock-names = "ppmu";
+                       status = "disabled";
+               };
+
+Example2 : Events of each PPMU node in exynos3250-rinato.dts are listed below.
+
+       &ppmu_dmc0 {
+               status = "okay";
+
+               events {
+                       ppmu_dmc0_3: ppmu-event3-dmc0 {
+                               event-name = "ppmu-event3-dmc0";
+                       };
+
+                       ppmu_dmc0_2: ppmu-event2-dmc0 {
+                               event-name = "ppmu-event2-dmc0";
+                       };
+
+                       ppmu_dmc0_1: ppmu-event1-dmc0 {
+                               event-name = "ppmu-event1-dmc0";
+                       };
+
+                       ppmu_dmc0_0: ppmu-event0-dmc0 {
+                               event-name = "ppmu-event0-dmc0";
+                       };
+               };
+       };
+
+       &ppmu_dmc1 {
+               status = "okay";
+
+               events {
+                       ppmu_dmc1_3: ppmu-event3-dmc1 {
+                               event-name = "ppmu-event3-dmc1";
+                       };
+               };
+       };
+
+       &ppmu_leftbus {
+               status = "okay";
+
+               events {
+                       ppmu_leftbus_3: ppmu-event3-leftbus {
+                               event-name = "ppmu-event3-leftbus";
+                       };
+               };
+       };
+
+       &ppmu_rightbus {
+               status = "okay";
+
+               events {
+                       ppmu_rightbus_3: ppmu-event3-rightbus {
+                               event-name = "ppmu-event3-rightbus";
+                       };
+               };
+       };
index 176d4fe4f076be0c785de56ade5cbca399e591e2..f06f1f609cb70713c7886fd9d2b86a3e7e633217 100644 (file)
@@ -1470,6 +1470,9 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
                       no_hwp
                         Do not enable hardware P state control (HWP)
                         if available.
+               hwp_only
+                       Only load intel_pstate on systems which support
+                       hardware P state control (HWP) if available.
 
        intremap=       [X86-64, Intel-IOMMU]
                        on      enable Interrupt Remapping (default)
index 1bdfa04437732753636e3b6adfc84d7e2626adab..4685aee197fdefdedc9f7a19174b458f58b92f95 100644 (file)
@@ -69,6 +69,10 @@ Reason for this is that the RTC is the only reliably available piece of
 hardware during resume operations where a value can be set that will
 survive a reboot.
 
+pm_trace is not compatible with asynchronous suspend, so it turns
+asynchronous suspend off (which may work around timing or
+ordering-sensitive bugs).
+
 Consequence is that after a resume (even if it is successful) your system
 clock will have a value corresponding to the magic number instead of the
 correct date/time! It is therefore advisable to use a program like ntp-date
index b1ddeb69cd9f89e2950886b339f40287dec2f969..54c7ce00d85fc4b549815d968dd973487ed68d17 100644 (file)
@@ -270,12 +270,12 @@ F:        drivers/acpi/
 F:     drivers/pnp/pnpacpi/
 F:     include/linux/acpi.h
 F:     include/acpi/
-F:     Documentation/acpi
+F:     Documentation/acpi/
 F:     Documentation/ABI/testing/sysfs-bus-acpi
 F:     drivers/pci/*acpi*
 F:     drivers/pci/*/*acpi*
 F:     drivers/pci/*/*/*acpi*
-F:     tools/power/acpi
+F:     tools/power/acpi/
 
 ACPI COMPONENT ARCHITECTURE (ACPICA)
 M:     Robert Moore <robert.moore@intel.com>
index ddd75c58b1e8f3f2a27e407b95ac040c54b79dd2..ab19b7c03423690adac07fd99e525a5f4ac3f359 100644 (file)
@@ -422,17 +422,16 @@ static int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 static int pcibios_init_resources(int busnr, struct pci_sys_data *sys)
 {
        int ret;
-       struct pci_host_bridge_window *window;
+       struct resource_entry *window;
 
        if (list_empty(&sys->resources)) {
                pci_add_resource_offset(&sys->resources,
                         &iomem_resource, sys->mem_offset);
        }
 
-       list_for_each_entry(window, &sys->resources, list) {
+       resource_list_for_each_entry(window, &sys->resources)
                if (resource_type(window->res) == IORESOURCE_IO)
                        return 0;
-       }
 
        sys->io_res.start = (busnr * SZ_64K) ?  : pcibios_min_io;
        sys->io_res.end = (busnr + 1) * SZ_64K - 1;
index 8b9318d311a0b81897d9acc84d1a630494de457d..bd09bf74f187b243de3e3bd6fefa214ff551436c 100644 (file)
@@ -69,10 +69,10 @@ static acpi_status find_csr_space(struct acpi_resource *resource, void *data)
        status = acpi_resource_to_address64(resource, &addr);
        if (ACPI_SUCCESS(status) &&
            addr.resource_type == ACPI_MEMORY_RANGE &&
-           addr.address_length &&
+           addr.address.address_length &&
            addr.producer_consumer == ACPI_CONSUMER) {
-               space->base = addr.minimum;
-               space->length = addr.address_length;
+               space->base = addr.address.minimum;
+               space->length = addr.address.address_length;
                return AE_CTRL_TERMINATE;
        }
        return AE_OK;           /* keep looking */
index e795cb848154a451bf12c83ce221123686b238b5..2c4498919d3c2bae493977ed922234117e3cc225 100644 (file)
@@ -380,9 +380,6 @@ static void __init acpi_madt_oem_check(char *oem_id, char *oem_table_id)
 
 static int __init acpi_parse_madt(struct acpi_table_header *table)
 {
-       if (!table)
-               return -EINVAL;
-
        acpi_madt = (struct acpi_table_madt *)table;
 
        acpi_madt_rev = acpi_madt->header.revision;
@@ -645,9 +642,6 @@ static int __init acpi_parse_fadt(struct acpi_table_header *table)
        struct acpi_table_header *fadt_header;
        struct acpi_table_fadt *fadt;
 
-       if (!table)
-               return -EINVAL;
-
        fadt_header = (struct acpi_table_header *)table;
        if (fadt_header->revision != 3)
                return -ENODEV; /* Only deal with ACPI 2.0 FADT */
index 900cc93e540961903816fd8d69769a3311fdbbf3..48cc65705db467a5d38d1353a452de0ef8f74761 100644 (file)
@@ -188,12 +188,12 @@ static u64 add_io_space(struct pci_root_info *info,
 
        name = (char *)(iospace + 1);
 
-       min = addr->minimum;
-       max = min + addr->address_length - 1;
+       min = addr->address.minimum;
+       max = min + addr->address.address_length - 1;
        if (addr->info.io.translation_type == ACPI_SPARSE_TRANSLATION)
                sparse = 1;
 
-       space_nr = new_space(addr->translation_offset, sparse);
+       space_nr = new_space(addr->address.translation_offset, sparse);
        if (space_nr == ~0)
                goto free_resource;
 
@@ -247,7 +247,7 @@ static acpi_status resource_to_window(struct acpi_resource *resource,
        if (ACPI_SUCCESS(status) &&
            (addr->resource_type == ACPI_MEMORY_RANGE ||
             addr->resource_type == ACPI_IO_RANGE) &&
-           addr->address_length &&
+           addr->address.address_length &&
            addr->producer_consumer == ACPI_PRODUCER)
                return AE_OK;
 
@@ -284,7 +284,7 @@ static acpi_status add_window(struct acpi_resource *res, void *data)
        if (addr.resource_type == ACPI_MEMORY_RANGE) {
                flags = IORESOURCE_MEM;
                root = &iomem_resource;
-               offset = addr.translation_offset;
+               offset = addr.address.translation_offset;
        } else if (addr.resource_type == ACPI_IO_RANGE) {
                flags = IORESOURCE_IO;
                root = &ioport_resource;
@@ -297,8 +297,8 @@ static acpi_status add_window(struct acpi_resource *res, void *data)
        resource = &info->res[info->res_num];
        resource->name = info->name;
        resource->flags = flags;
-       resource->start = addr.minimum + offset;
-       resource->end = resource->start + addr.address_length - 1;
+       resource->start = addr.address.minimum + offset;
+       resource->end = resource->start + addr.address.address_length - 1;
        info->res_offset[info->res_num] = offset;
 
        if (insert_resource(root, resource)) {
index 5e28e2be3a41d6234ed18de61cff9d6767050b01..019f4e5c2b75ea3f5cd0e864604b3cf9dd2139ef 100644 (file)
@@ -497,6 +497,17 @@ config X86_INTEL_LPSS
          things like clock tree (common clock framework) and pincontrol
          which are needed by the LPSS peripheral drivers.
 
+config X86_AMD_PLATFORM_DEVICE
+       bool "AMD ACPI2Platform devices support"
+       depends on ACPI
+       select COMMON_CLK
+       select PINCTRL
+       ---help---
+         Select to interpret AMD specific ACPI device to platform device
+         such as I2C, UART, GPIO found on AMD Carrizo and later chipsets.
+         I2C and UART depend on COMMON_CLK to set clock. GPIO driver is
+         implemented under PINCTRL subsystem.
+
 config IOSF_MBI
        tristate "Intel SoC IOSF Sideband support for SoC platforms"
        depends on PCI
index 164e3f8d3c3dbb6eb4fc0ea60e01cae15fe5116e..fa1195dae42541aaa1d836782a3a65aa25640e74 100644 (file)
@@ -93,8 +93,6 @@ extern raw_spinlock_t pci_config_lock;
 extern int (*pcibios_enable_irq)(struct pci_dev *dev);
 extern void (*pcibios_disable_irq)(struct pci_dev *dev);
 
-extern bool mp_should_keep_irq(struct device *dev);
-
 struct pci_raw_ops {
        int (*read)(unsigned int domain, unsigned int bus, unsigned int devfn,
                                                int reg, int len, u32 *val);
index d979e5abae5510400ee0a8d0257a06a4cd9302ba..536240fa9a95a4d6c7d345cbb7faa7b822ed5e74 100644 (file)
 #define MSR_CC6_DEMOTION_POLICY_CONFIG 0x00000668
 #define MSR_MC6_DEMOTION_POLICY_CONFIG 0x00000669
 
+#define MSR_CORE_PERF_LIMIT_REASONS    0x00000690
+#define MSR_GFX_PERF_LIMIT_REASONS     0x000006B0
+#define MSR_RING_PERF_LIMIT_REASONS    0x000006B1
+
 /* Hardware P state interface */
 #define MSR_PPERF                      0x0000064e
 #define MSR_PERF_LIMIT_REASONS         0x0000064f
 
 #define MSR_IA32_PERF_STATUS           0x00000198
 #define MSR_IA32_PERF_CTL              0x00000199
+#define INTEL_PERF_CTL_MASK            0xffff
 #define MSR_AMD_PSTATE_DEF_BASE                0xc0010064
 #define MSR_AMD_PERF_STATUS            0xc0010063
 #define MSR_AMD_PERF_CTL               0xc0010062
index a18fff361c7f47113359de453625feeb036a2e64..ae97ed0873c6e3f35e28545b521dda9c25c8d74e 100644 (file)
@@ -845,13 +845,7 @@ int acpi_ioapic_registered(acpi_handle handle, u32 gsi_base)
 
 static int __init acpi_parse_sbf(struct acpi_table_header *table)
 {
-       struct acpi_table_boot *sb;
-
-       sb = (struct acpi_table_boot *)table;
-       if (!sb) {
-               printk(KERN_WARNING PREFIX "Unable to map SBF\n");
-               return -ENODEV;
-       }
+       struct acpi_table_boot *sb = (struct acpi_table_boot *)table;
 
        sbf_port = sb->cmos_index;      /* Save CMOS port */
 
@@ -865,13 +859,7 @@ static struct resource *hpet_res __initdata;
 
 static int __init acpi_parse_hpet(struct acpi_table_header *table)
 {
-       struct acpi_table_hpet *hpet_tbl;
-
-       hpet_tbl = (struct acpi_table_hpet *)table;
-       if (!hpet_tbl) {
-               printk(KERN_WARNING PREFIX "Unable to map HPET\n");
-               return -ENODEV;
-       }
+       struct acpi_table_hpet *hpet_tbl = (struct acpi_table_hpet *)table;
 
        if (hpet_tbl->address.space_id != ACPI_SPACE_MEM) {
                printk(KERN_WARNING PREFIX "HPET timers must be located in "
index cfd1b132b8e3ed4023ceecf12dbb68fc35bb68a3..6ac273832f284635ac1a66bf3f8551de379fa0f0 100644 (file)
@@ -10,9 +10,6 @@
 struct pci_root_info {
        struct acpi_device *bridge;
        char name[16];
-       unsigned int res_num;
-       struct resource *res;
-       resource_size_t *res_offset;
        struct pci_sysdata sd;
 #ifdef CONFIG_PCI_MMCONFIG
        bool mcfg_added;
@@ -218,130 +215,41 @@ static void teardown_mcfg_map(struct pci_root_info *info)
 }
 #endif
 
-static acpi_status resource_to_addr(struct acpi_resource *resource,
-                                   struct acpi_resource_address64 *addr)
-{
-       acpi_status status;
-       struct acpi_resource_memory24 *memory24;
-       struct acpi_resource_memory32 *memory32;
-       struct acpi_resource_fixed_memory32 *fixed_memory32;
-
-       memset(addr, 0, sizeof(*addr));
-       switch (resource->type) {
-       case ACPI_RESOURCE_TYPE_MEMORY24:
-               memory24 = &resource->data.memory24;
-               addr->resource_type = ACPI_MEMORY_RANGE;
-               addr->minimum = memory24->minimum;
-               addr->address_length = memory24->address_length;
-               addr->maximum = addr->minimum + addr->address_length - 1;
-               return AE_OK;
-       case ACPI_RESOURCE_TYPE_MEMORY32:
-               memory32 = &resource->data.memory32;
-               addr->resource_type = ACPI_MEMORY_RANGE;
-               addr->minimum = memory32->minimum;
-               addr->address_length = memory32->address_length;
-               addr->maximum = addr->minimum + addr->address_length - 1;
-               return AE_OK;
-       case ACPI_RESOURCE_TYPE_FIXED_MEMORY32:
-               fixed_memory32 = &resource->data.fixed_memory32;
-               addr->resource_type = ACPI_MEMORY_RANGE;
-               addr->minimum = fixed_memory32->address;
-               addr->address_length = fixed_memory32->address_length;
-               addr->maximum = addr->minimum + addr->address_length - 1;
-               return AE_OK;
-       case ACPI_RESOURCE_TYPE_ADDRESS16:
-       case ACPI_RESOURCE_TYPE_ADDRESS32:
-       case ACPI_RESOURCE_TYPE_ADDRESS64:
-               status = acpi_resource_to_address64(resource, addr);
-               if (ACPI_SUCCESS(status) &&
-                   (addr->resource_type == ACPI_MEMORY_RANGE ||
-                   addr->resource_type == ACPI_IO_RANGE) &&
-                   addr->address_length > 0) {
-                       return AE_OK;
-               }
-               break;
-       }
-       return AE_ERROR;
-}
-
-static acpi_status count_resource(struct acpi_resource *acpi_res, void *data)
+static void validate_resources(struct device *dev, struct list_head *crs_res,
+                              unsigned long type)
 {
-       struct pci_root_info *info = data;
-       struct acpi_resource_address64 addr;
-       acpi_status status;
-
-       status = resource_to_addr(acpi_res, &addr);
-       if (ACPI_SUCCESS(status))
-               info->res_num++;
-       return AE_OK;
-}
-
-static acpi_status setup_resource(struct acpi_resource *acpi_res, void *data)
-{
-       struct pci_root_info *info = data;
-       struct resource *res;
-       struct acpi_resource_address64 addr;
-       acpi_status status;
-       unsigned long flags;
-       u64 start, orig_end, end;
-
-       status = resource_to_addr(acpi_res, &addr);
-       if (!ACPI_SUCCESS(status))
-               return AE_OK;
-
-       if (addr.resource_type == ACPI_MEMORY_RANGE) {
-               flags = IORESOURCE_MEM;
-               if (addr.info.mem.caching == ACPI_PREFETCHABLE_MEMORY)
-                       flags |= IORESOURCE_PREFETCH;
-       } else if (addr.resource_type == ACPI_IO_RANGE) {
-               flags = IORESOURCE_IO;
-       } else
-               return AE_OK;
-
-       start = addr.minimum + addr.translation_offset;
-       orig_end = end = addr.maximum + addr.translation_offset;
-
-       /* Exclude non-addressable range or non-addressable portion of range */
-       end = min(end, (u64)iomem_resource.end);
-       if (end <= start) {
-               dev_info(&info->bridge->dev,
-                       "host bridge window [%#llx-%#llx] "
-                       "(ignored, not CPU addressable)\n", start, orig_end);
-               return AE_OK;
-       } else if (orig_end != end) {
-               dev_info(&info->bridge->dev,
-                       "host bridge window [%#llx-%#llx] "
-                       "([%#llx-%#llx] ignored, not CPU addressable)\n", 
-                       start, orig_end, end + 1, orig_end);
-       }
+       LIST_HEAD(list);
+       struct resource *res1, *res2, *root = NULL;
+       struct resource_entry *tmp, *entry, *entry2;
 
-       res = &info->res[info->res_num];
-       res->name = info->name;
-       res->flags = flags;
-       res->start = start;
-       res->end = end;
-       info->res_offset[info->res_num] = addr.translation_offset;
-       info->res_num++;
+       BUG_ON((type & (IORESOURCE_MEM | IORESOURCE_IO)) == 0);
+       root = (type & IORESOURCE_MEM) ? &iomem_resource : &ioport_resource;
 
-       if (!pci_use_crs)
-               dev_printk(KERN_DEBUG, &info->bridge->dev,
-                          "host bridge window %pR (ignored)\n", res);
+       list_splice_init(crs_res, &list);
+       resource_list_for_each_entry_safe(entry, tmp, &list) {
+               bool free = false;
+               resource_size_t end;
 
-       return AE_OK;
-}
-
-static void coalesce_windows(struct pci_root_info *info, unsigned long type)
-{
-       int i, j;
-       struct resource *res1, *res2;
-
-       for (i = 0; i < info->res_num; i++) {
-               res1 = &info->res[i];
+               res1 = entry->res;
                if (!(res1->flags & type))
-                       continue;
+                       goto next;
+
+               /* Exclude non-addressable range or non-addressable portion */
+               end = min(res1->end, root->end);
+               if (end <= res1->start) {
+                       dev_info(dev, "host bridge window %pR (ignored, not CPU addressable)\n",
+                                res1);
+                       free = true;
+                       goto next;
+               } else if (res1->end != end) {
+                       dev_info(dev, "host bridge window %pR ([%#llx-%#llx] ignored, not CPU addressable)\n",
+                                res1, (unsigned long long)end + 1,
+                                (unsigned long long)res1->end);
+                       res1->end = end;
+               }
 
-               for (j = i + 1; j < info->res_num; j++) {
-                       res2 = &info->res[j];
+               resource_list_for_each_entry(entry2, crs_res) {
+                       res2 = entry2->res;
                        if (!(res2->flags & type))
                                continue;
 
@@ -353,118 +261,92 @@ static void coalesce_windows(struct pci_root_info *info, unsigned long type)
                        if (resource_overlaps(res1, res2)) {
                                res2->start = min(res1->start, res2->start);
                                res2->end = max(res1->end, res2->end);
-                               dev_info(&info->bridge->dev,
-                                        "host bridge window expanded to %pR; %pR ignored\n",
+                               dev_info(dev, "host bridge window expanded to %pR; %pR ignored\n",
                                         res2, res1);
-                               res1->flags = 0;
+                               free = true;
+                               goto next;
                        }
                }
+
+next:
+               resource_list_del(entry);
+               if (free)
+                       resource_list_free_entry(entry);
+               else
+                       resource_list_add_tail(entry, crs_res);
        }
 }
 
 static void add_resources(struct pci_root_info *info,
-                         struct list_head *resources)
+                         struct list_head *resources,
+                         struct list_head *crs_res)
 {
-       int i;
-       struct resource *res, *root, *conflict;
-
-       coalesce_windows(info, IORESOURCE_MEM);
-       coalesce_windows(info, IORESOURCE_IO);
+       struct resource_entry *entry, *tmp;
+       struct resource *res, *conflict, *root = NULL;
 
-       for (i = 0; i < info->res_num; i++) {
-               res = &info->res[i];
+       validate_resources(&info->bridge->dev, crs_res, IORESOURCE_MEM);
+       validate_resources(&info->bridge->dev, crs_res, IORESOURCE_IO);
 
+       resource_list_for_each_entry_safe(entry, tmp, crs_res) {
+               res = entry->res;
                if (res->flags & IORESOURCE_MEM)
                        root = &iomem_resource;
                else if (res->flags & IORESOURCE_IO)
                        root = &ioport_resource;
                else
-                       continue;
+                       BUG_ON(res);
 
                conflict = insert_resource_conflict(root, res);
-               if (conflict)
+               if (conflict) {
                        dev_info(&info->bridge->dev,
                                 "ignoring host bridge window %pR (conflicts with %s %pR)\n",
                                 res, conflict->name, conflict);
-               else
-                       pci_add_resource_offset(resources, res,
-                                       info->res_offset[i]);
+                       resource_list_destroy_entry(entry);
+               }
        }
-}
 
-static void free_pci_root_info_res(struct pci_root_info *info)
-{
-       kfree(info->res);
-       info->res = NULL;
-       kfree(info->res_offset);
-       info->res_offset = NULL;
-       info->res_num = 0;
+       list_splice_tail(crs_res, resources);
 }
 
-static void __release_pci_root_info(struct pci_root_info *info)
+static void release_pci_root_info(struct pci_host_bridge *bridge)
 {
-       int i;
        struct resource *res;
+       struct resource_entry *entry;
+       struct pci_root_info *info = bridge->release_data;
 
-       for (i = 0; i < info->res_num; i++) {
-               res = &info->res[i];
-
-               if (!res->parent)
-                       continue;
-
-               if (!(res->flags & (IORESOURCE_MEM | IORESOURCE_IO)))
-                       continue;
-
-               release_resource(res);
+       resource_list_for_each_entry(entry, &bridge->windows) {
+               res = entry->res;
+               if (res->parent &&
+                   (res->flags & (IORESOURCE_MEM | IORESOURCE_IO)))
+                       release_resource(res);
        }
 
-       free_pci_root_info_res(info);
-
        teardown_mcfg_map(info);
-
        kfree(info);
 }
 
-static void release_pci_root_info(struct pci_host_bridge *bridge)
-{
-       struct pci_root_info *info = bridge->release_data;
-
-       __release_pci_root_info(info);
-}
-
 static void probe_pci_root_info(struct pci_root_info *info,
                                struct acpi_device *device,
-                               int busnum, int domain)
+                               int busnum, int domain,
+                               struct list_head *list)
 {
-       size_t size;
+       int ret;
+       struct resource_entry *entry;
 
        sprintf(info->name, "PCI Bus %04x:%02x", domain, busnum);
        info->bridge = device;
-
-       info->res_num = 0;
-       acpi_walk_resources(device->handle, METHOD_NAME__CRS, count_resource,
-                               info);
-       if (!info->res_num)
-               return;
-
-       size = sizeof(*info->res) * info->res_num;
-       info->res = kzalloc_node(size, GFP_KERNEL, info->sd.node);
-       if (!info->res) {
-               info->res_num = 0;
-               return;
-       }
-
-       size = sizeof(*info->res_offset) * info->res_num;
-       info->res_num = 0;
-       info->res_offset = kzalloc_node(size, GFP_KERNEL, info->sd.node);
-       if (!info->res_offset) {
-               kfree(info->res);
-               info->res = NULL;
-               return;
-       }
-
-       acpi_walk_resources(device->handle, METHOD_NAME__CRS, setup_resource,
-                               info);
+       ret = acpi_dev_get_resources(device, list,
+                                    acpi_dev_filter_resource_type_cb,
+                                    (void *)(IORESOURCE_IO | IORESOURCE_MEM));
+       if (ret < 0)
+               dev_warn(&device->dev,
+                        "failed to parse _CRS method, error code %d\n", ret);
+       else if (ret == 0)
+               dev_dbg(&device->dev,
+                       "no IO and memory resources present in _CRS\n");
+       else
+               resource_list_for_each_entry(entry, list)
+                       entry->res->name = info->name;
 }
 
 struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
@@ -473,6 +355,8 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
        struct pci_root_info *info;
        int domain = root->segment;
        int busnum = root->secondary.start;
+       struct resource_entry *res_entry;
+       LIST_HEAD(crs_res);
        LIST_HEAD(resources);
        struct pci_bus *bus;
        struct pci_sysdata *sd;
@@ -520,18 +404,22 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
                memcpy(bus->sysdata, sd, sizeof(*sd));
                kfree(info);
        } else {
-               probe_pci_root_info(info, device, busnum, domain);
-
                /* insert busn res at first */
                pci_add_resource(&resources,  &root->secondary);
+
                /*
                 * _CRS with no apertures is normal, so only fall back to
                 * defaults or native bridge info if we're ignoring _CRS.
                 */
-               if (pci_use_crs)
-                       add_resources(info, &resources);
-               else {
-                       free_pci_root_info_res(info);
+               probe_pci_root_info(info, device, busnum, domain, &crs_res);
+               if (pci_use_crs) {
+                       add_resources(info, &resources, &crs_res);
+               } else {
+                       resource_list_for_each_entry(res_entry, &crs_res)
+                               dev_printk(KERN_DEBUG, &device->dev,
+                                          "host bridge window %pR (ignored)\n",
+                                          res_entry->res);
+                       resource_list_free(&crs_res);
                        x86_pci_root_bus_resources(busnum, &resources);
                }
 
@@ -546,8 +434,9 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
                                to_pci_host_bridge(bus->bridge),
                                release_pci_root_info, info);
                } else {
-                       pci_free_resource_list(&resources);
-                       __release_pci_root_info(info);
+                       resource_list_free(&resources);
+                       teardown_mcfg_map(info);
+                       kfree(info);
                }
        }
 
index f3a2cfc14125bbb387d7eae474304a59cf8b2ef1..7bcf06a7cd12069e9e4f5c2f13066e1551ba8a27 100644 (file)
@@ -31,7 +31,7 @@ void x86_pci_root_bus_resources(int bus, struct list_head *resources)
 {
        struct pci_root_info *info = x86_find_pci_root_info(bus);
        struct pci_root_res *root_res;
-       struct pci_host_bridge_window *window;
+       struct resource_entry *window;
        bool found = false;
 
        if (!info)
@@ -41,7 +41,7 @@ void x86_pci_root_bus_resources(int bus, struct list_head *resources)
               bus);
 
        /* already added by acpi ? */
-       list_for_each_entry(window, resources, list)
+       resource_list_for_each_entry(window, resources)
                if (window->res->flags & IORESOURCE_BUS) {
                        found = true;
                        break;
index 2fb384724ebb52d1cf0ba6131b418e81609ca55c..3d2612b68694efd294ca214d478c7777030f4b03 100644 (file)
@@ -513,6 +513,31 @@ void __init pcibios_set_cache_line_size(void)
        }
 }
 
+/*
+ * Some device drivers assume dev->irq won't change after calling
+ * pci_disable_device(). So delay releasing of IRQ resource to driver
+ * unbinding time. Otherwise it will break PM subsystem and drivers
+ * like xen-pciback etc.
+ */
+static int pci_irq_notifier(struct notifier_block *nb, unsigned long action,
+                           void *data)
+{
+       struct pci_dev *dev = to_pci_dev(data);
+
+       if (action != BUS_NOTIFY_UNBOUND_DRIVER)
+               return NOTIFY_DONE;
+
+       if (pcibios_disable_irq)
+               pcibios_disable_irq(dev);
+
+       return NOTIFY_OK;
+}
+
+static struct notifier_block pci_irq_nb = {
+       .notifier_call = pci_irq_notifier,
+       .priority = INT_MIN,
+};
+
 int __init pcibios_init(void)
 {
        if (!raw_pci_ops) {
@@ -525,6 +550,9 @@ int __init pcibios_init(void)
 
        if (pci_bf_sort >= pci_force_bf)
                pci_sort_breadthfirst();
+
+       bus_register_notifier(&pci_bus_type, &pci_irq_nb);
+
        return 0;
 }
 
@@ -683,12 +711,6 @@ int pcibios_enable_device(struct pci_dev *dev, int mask)
        return 0;
 }
 
-void pcibios_disable_device (struct pci_dev *dev)
-{
-       if (!pci_dev_msi_enabled(dev) && pcibios_disable_irq)
-               pcibios_disable_irq(dev);
-}
-
 int pci_ext_cfg_avail(void)
 {
        if (raw_pci_ext_ops)
index 852aa4c92da027cb07fb64c77c855aaf0877a1da..efb849323c745899f5e665364ceec36a1d15cb9d 100644 (file)
@@ -234,10 +234,10 @@ static int intel_mid_pci_irq_enable(struct pci_dev *dev)
 
 static void intel_mid_pci_irq_disable(struct pci_dev *dev)
 {
-       if (!mp_should_keep_irq(&dev->dev) && dev->irq_managed &&
-           dev->irq > 0) {
+       if (dev->irq_managed && dev->irq > 0) {
                mp_unmap_irq(dev->irq);
                dev->irq_managed = 0;
+               dev->irq = 0;
        }
 }
 
index 5dc6ca5e174131d2c7208ea1ed86739ef4532d22..e71b3dbd87b8f688d3f2cbfa995421bd516ba581 100644 (file)
@@ -1256,22 +1256,9 @@ static int pirq_enable_irq(struct pci_dev *dev)
        return 0;
 }
 
-bool mp_should_keep_irq(struct device *dev)
-{
-       if (dev->power.is_prepared)
-               return true;
-#ifdef CONFIG_PM
-       if (dev->power.runtime_status == RPM_SUSPENDING)
-               return true;
-#endif
-
-       return false;
-}
-
 static void pirq_disable_irq(struct pci_dev *dev)
 {
-       if (io_apic_assign_pci_irqs && !mp_should_keep_irq(&dev->dev) &&
-           dev->irq_managed && dev->irq) {
+       if (io_apic_assign_pci_irqs && dev->irq_managed && dev->irq) {
                mp_unmap_irq(dev->irq);
                dev->irq = 0;
                dev->irq_managed = 0;
index 676e5e04e4d426ea54bd30e4ad4893b54a52cff0..dd30b7e08bc2c8e703c1f91f7f8b60894d36f6eb 100644 (file)
@@ -397,12 +397,12 @@ static acpi_status check_mcfg_resource(struct acpi_resource *res, void *data)
 
        status = acpi_resource_to_address64(res, &address);
        if (ACPI_FAILURE(status) ||
-          (address.address_length <= 0) ||
+          (address.address.address_length <= 0) ||
           (address.resource_type != ACPI_MEMORY_RANGE))
                return AE_OK;
 
-       if ((mcfg_res->start >= address.minimum) &&
-           (mcfg_res->end < (address.minimum + address.address_length))) {
+       if ((mcfg_res->start >= address.address.minimum) &&
+           (mcfg_res->end < (address.address.minimum + address.address.address_length))) {
                mcfg_res->flags = 1;
                return AE_CTRL_TERMINATE;
        }
index 8951cefb0a965834be5a814a2a23e15ce7177c72..e6c3ddd926658cd615241328e1f51b90ce16c611 100644 (file)
@@ -315,6 +315,12 @@ config ACPI_HOTPLUG_MEMORY
          To compile this driver as a module, choose M here:
          the module will be called acpi_memhotplug.
 
+config ACPI_HOTPLUG_IOAPIC
+       bool
+       depends on PCI
+       depends on X86_IO_APIC
+       default y
+
 config ACPI_SBS
        tristate "Smart Battery System"
        depends on X86
index f74317cc1ca9343adfa20744e5d1d84cc4947a84..b18cd2151ddb244e1961c48165ab973d88880d71 100644 (file)
@@ -40,7 +40,7 @@ acpi-$(CONFIG_ARCH_MIGHT_HAVE_ACPI_PDC) += processor_pdc.o
 acpi-y                         += ec.o
 acpi-$(CONFIG_ACPI_DOCK)       += dock.o
 acpi-y                         += pci_root.o pci_link.o pci_irq.o
-acpi-y                         += acpi_lpss.o
+acpi-y                         += acpi_lpss.o acpi_apd.o
 acpi-y                         += acpi_platform.o
 acpi-y                         += acpi_pnp.o
 acpi-y                         += int340x_thermal.o
@@ -70,6 +70,7 @@ obj-$(CONFIG_ACPI_PROCESSOR)  += processor.o
 obj-y                          += container.o
 obj-$(CONFIG_ACPI_THERMAL)     += thermal.o
 obj-y                          += acpi_memhotplug.o
+obj-$(CONFIG_ACPI_HOTPLUG_IOAPIC) += ioapic.o
 obj-$(CONFIG_ACPI_BATTERY)     += battery.o
 obj-$(CONFIG_ACPI_SBS)         += sbshc.o
 obj-$(CONFIG_ACPI_SBS)         += sbs.o
diff --git a/drivers/acpi/acpi_apd.c b/drivers/acpi/acpi_apd.c
new file mode 100644 (file)
index 0000000..3984ea9
--- /dev/null
@@ -0,0 +1,150 @@
+/*
+ * AMD ACPI support for ACPI 2.0 based platform devices.
+ *
+ * Copyright (c) 2014,2015 AMD Corporation.
+ * Authors: Ken Xue <Ken.Xue@amd.com>
+ *     Wu, Jeff <Jeff.Wu@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/clk-provider.h>
+#include <linux/platform_device.h>
+#include <linux/pm_domain.h>
+#include <linux/clkdev.h>
+#include <linux/acpi.h>
+#include <linux/err.h>
+#include <linux/clk.h>
+#include <linux/pm.h>
+
+#include "internal.h"
+
+ACPI_MODULE_NAME("acpi_apd");
+struct apd_private_data;
+
+/**
+ * ACPI_APD_SYSFS : add device attributes in sysfs
+ * ACPI_APD_PM : attach power domain to device
+ */
+#define ACPI_APD_SYSFS BIT(0)
+#define ACPI_APD_PM    BIT(1)
+
+/**
+ * struct apd_device_desc - a descriptor for an APD device
+ * @flags: device flags like %ACPI_APD_SYSFS, %ACPI_APD_PM
+ * @fixed_clk_rate: fixed rate input clock source for acpi device;
+ *                     0 means no fixed rate input clock source
+ * @setup: a hook routine to set up device resources when the platform device is created
+ *
+ * Device description defined as acpi_device_id.driver_data
+ */
+struct apd_device_desc {
+       unsigned int flags;
+       unsigned int fixed_clk_rate;
+       int (*setup)(struct apd_private_data *pdata);
+};
+
+struct apd_private_data {
+       struct clk *clk;
+       struct acpi_device *adev;
+       const struct apd_device_desc *dev_desc;
+};
+
+#ifdef CONFIG_X86_AMD_PLATFORM_DEVICE
+#define APD_ADDR(desc) ((unsigned long)&desc)
+
+static int acpi_apd_setup(struct apd_private_data *pdata)
+{
+       const struct apd_device_desc *dev_desc = pdata->dev_desc;
+       struct clk *clk = ERR_PTR(-ENODEV);
+
+       if (dev_desc->fixed_clk_rate) {
+               clk = clk_register_fixed_rate(&pdata->adev->dev,
+                                       dev_name(&pdata->adev->dev),
+                                       NULL, CLK_IS_ROOT,
+                                       dev_desc->fixed_clk_rate);
+               clk_register_clkdev(clk, NULL, dev_name(&pdata->adev->dev));
+               pdata->clk = clk;
+       }
+
+       return 0;
+}
+
+static struct apd_device_desc cz_i2c_desc = {
+       .setup = acpi_apd_setup,
+       .fixed_clk_rate = 133000000,
+};
+
+static struct apd_device_desc cz_uart_desc = {
+       .setup = acpi_apd_setup,
+       .fixed_clk_rate = 48000000,
+};
+
+#else
+
+#define APD_ADDR(desc) (0UL)
+
+#endif /* CONFIG_X86_AMD_PLATFORM_DEVICE */
+
+/*
+ * Create the platform device when the ACPI scan handler's attach hook runs.
+ * Returns a value > 0 when the device was created successfully.
+ */
+static int acpi_apd_create_device(struct acpi_device *adev,
+                                  const struct acpi_device_id *id)
+{
+       const struct apd_device_desc *dev_desc = (void *)id->driver_data;
+       struct apd_private_data *pdata;
+       struct platform_device *pdev;
+       int ret;
+
+       if (!dev_desc) {
+               pdev = acpi_create_platform_device(adev);
+               return IS_ERR_OR_NULL(pdev) ? PTR_ERR(pdev) : 1;
+       }
+
+       pdata = kzalloc(sizeof(*pdata), GFP_KERNEL);
+       if (!pdata)
+               return -ENOMEM;
+
+       pdata->adev = adev;
+       pdata->dev_desc = dev_desc;
+
+       if (dev_desc->setup) {
+               ret = dev_desc->setup(pdata);
+               if (ret)
+                       goto err_out;
+       }
+
+       adev->driver_data = pdata;
+       pdev = acpi_create_platform_device(adev);
+       if (!IS_ERR_OR_NULL(pdev))
+               return 1;
+
+       ret = PTR_ERR(pdev);
+       adev->driver_data = NULL;
+
+ err_out:
+       kfree(pdata);
+       return ret;
+}
+
+static const struct acpi_device_id acpi_apd_device_ids[] = {
+       /* Generic apd devices */
+       { "AMD0010", APD_ADDR(cz_i2c_desc) },
+       { "AMD0020", APD_ADDR(cz_uart_desc) },
+       { "AMD0030", },
+       { }
+};
+
+static struct acpi_scan_handler apd_handler = {
+       .ids = acpi_apd_device_ids,
+       .attach = acpi_apd_create_device,
+};
+
+void __init acpi_apd_init(void)
+{
+       acpi_scan_add_handler(&apd_handler);
+}
index e75737fd7eefbc80de3dc8731fcbec94ab321f89..02e835f3cf8aa76326b9994768b75f2d39c39fed 100644 (file)
@@ -125,7 +125,7 @@ static struct lpss_device_desc lpt_dev_desc = {
 };
 
 static struct lpss_device_desc lpt_i2c_dev_desc = {
-       .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_LTR,
+       .flags = LPSS_CLK | LPSS_LTR,
        .prv_offset = 0x800,
 };
 
@@ -307,7 +307,7 @@ static int acpi_lpss_create_device(struct acpi_device *adev,
 {
        struct lpss_device_desc *dev_desc;
        struct lpss_private_data *pdata;
-       struct resource_list_entry *rentry;
+       struct resource_entry *rentry;
        struct list_head resource_list;
        struct platform_device *pdev;
        int ret;
@@ -327,13 +327,15 @@ static int acpi_lpss_create_device(struct acpi_device *adev,
                goto err_out;
 
        list_for_each_entry(rentry, &resource_list, node)
-               if (resource_type(&rentry->res) == IORESOURCE_MEM) {
+               if (resource_type(rentry->res) == IORESOURCE_MEM) {
                        if (dev_desc->prv_size_override)
                                pdata->mmio_size = dev_desc->prv_size_override;
                        else
-                               pdata->mmio_size = resource_size(&rentry->res);
-                       pdata->mmio_base = ioremap(rentry->res.start,
+                               pdata->mmio_size = resource_size(rentry->res);
+                       pdata->mmio_base = ioremap(rentry->res->start,
                                                   pdata->mmio_size);
+                       if (!pdata->mmio_base)
+                               goto err_out;
                        break;
                }
 
index 23e2319ead41c33402c942646c0ba11e35510d96..ee28f4d15625f433a68f0bcfa3dfc6ff521dd388 100644 (file)
@@ -101,8 +101,8 @@ acpi_memory_get_resource(struct acpi_resource *resource, void *context)
                /* Can we combine the resource range information? */
                if ((info->caching == address64.info.mem.caching) &&
                    (info->write_protect == address64.info.mem.write_protect) &&
-                   (info->start_addr + info->length == address64.minimum)) {
-                       info->length += address64.address_length;
+                   (info->start_addr + info->length == address64.address.minimum)) {
+                       info->length += address64.address.address_length;
                        return AE_OK;
                }
        }
@@ -114,8 +114,8 @@ acpi_memory_get_resource(struct acpi_resource *resource, void *context)
        INIT_LIST_HEAD(&new->list);
        new->caching = address64.info.mem.caching;
        new->write_protect = address64.info.mem.write_protect;
-       new->start_addr = address64.minimum;
-       new->length = address64.address_length;
+       new->start_addr = address64.address.minimum;
+       new->length = address64.address.address_length;
        list_add_tail(&new->list, &mem_device->res_list);
 
        return AE_OK;
index 6ba8beb6b9d2a515828c3987393dc75fa8b6bc2c..1284138e42ab486d584f61e3f90bbaf0320a1a5b 100644 (file)
@@ -45,7 +45,7 @@ struct platform_device *acpi_create_platform_device(struct acpi_device *adev)
        struct platform_device *pdev = NULL;
        struct acpi_device *acpi_parent;
        struct platform_device_info pdevinfo;
-       struct resource_list_entry *rentry;
+       struct resource_entry *rentry;
        struct list_head resource_list;
        struct resource *resources = NULL;
        int count;
@@ -71,7 +71,7 @@ struct platform_device *acpi_create_platform_device(struct acpi_device *adev)
                }
                count = 0;
                list_for_each_entry(rentry, &resource_list, node)
-                       resources[count++] = rentry->res;
+                       resources[count++] = *rentry->res;
 
                acpi_dev_free_resource_list(&resource_list);
        }
index 3d2c88289da9ff88a0dde088ee5a3aba30362131..d863016565b51acddd570f6440587fa5e13e32de 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -47,7 +47,7 @@
 /* Common info for tool signons */
 
 #define ACPICA_NAME                 "Intel ACPI Component Architecture"
-#define ACPICA_COPYRIGHT            "Copyright (c) 2000 - 2014 Intel Corporation"
+#define ACPICA_COPYRIGHT            "Copyright (c) 2000 - 2015 Intel Corporation"
 
 #if ACPI_MACHINE_WIDTH == 64
 #define ACPI_WIDTH          "-64"
index 6f1c616910acad10e379a320933cb10be08f9d94..853aa2dbdb61d203d7d01090895620c1929f1421 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 1d026ff1683f381fffaae26cfa9262d9e208416d..4169bb87a99690c98dc5d5c81d3eb62925e98370 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index d3e2cc395d7fad8bbd862b72162dd6b74c0515d9..408f04bcaab440b9a918b746b03005bbf4c1e8c0 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 7a7811a9fc261e8bc48a2ee3fb83a268382e1ec6..228704b7865774695764286a32aab5c78490a26b 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -143,8 +143,6 @@ acpi_ev_match_gpe_method(acpi_handle obj_handle,
 acpi_status
 acpi_ev_walk_gpe_list(acpi_gpe_callback gpe_walk_callback, void *context);
 
-u8 acpi_ev_valid_gpe_event(struct acpi_gpe_event_info *gpe_event_info);
-
 acpi_status
 acpi_ev_get_gpe_device(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
                       struct acpi_gpe_block_info *gpe_block, void *context);
index 7f60582d0c8ce1c99bdeca6ba30b647574c33beb..a165d25343e8d535afa1b3456aeda7a667b7a8c9 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index c318d3e27893b6a4c6436ca5b906edcd6df3b656..196a552445591b489251773045a943aaafc62847 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index b01f71ce05230f58f0e884d38d59b8c222d3eb99..1886bde54b5d323e0d9543dbac41245d7b8e1075 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 680d23bbae7c4ec293df73a01966a29a345fb6ce..7add32e5d8c55da7d7ac8958d4813070d5168ce2 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 4bceb11c7380d118020b512a93a267c4c6b3114a..cf607fe69dbd2274ca6bc079bf60b0af8fbbd674 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index ee1c040f321c621bc33ed8dd66c2c9426e6f3801..952fbe0b7231a79c62f463bd36e8bce596c8c478 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 8abb393dafabe621a48e4195fd3ae899eca28b8b..3e9720e1f34f79464abbeb0fe7cb6ca7977a7fe0 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index dda0e6affcf1ccb1199320ff4ede74f471ebbad5..a5f17de45ac62c6b24d846b3f0f802c1e2080ce7 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 6168b85463edc1b0b68b5405a8e64ed449b2bdd8..74a390c6db16997785d7a02a74a7f97f4ce831f0 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index bd3908d26c4f1afa97db913e905620195daa0587..a972d11c97c97e26504a71e82302a93b0890b096 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 4b008e8884a111326fe17f5a1f5ee2adc028f8f6..efc4c7124ccc6620cda8003db4238ff49597fb79 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index cf7346110bd825e2cd1d514d6eed507b2e262421..d14b547b7cd5fa888d8bfb2cf9a9e9e114e81656 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 1afe46e44dacda512b5429e3f58465aabf95082c..1c127a43017b8b50e658af366438eb5fc498c195 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 486d342e74b6d35207f4be964b340da1cc07a6b8..c2f03e8774adfa4c5897333ad084ea3be304d5be 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 5908ccec6aea8b5858338cf5b2a7de95bdbb118d..3a95068fc11988d89ac27763fc45d1eeeadf7544 100644 (file)
@@ -7,7 +7,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 3a0beeb86ba5cbb46f2cb9c8c86b31fcf7fd9c22..ee0cdd60b93d406a14d60428b759840af2f75e52 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 720b1cdda7113665da0883681c3b0765a4d63245..3e6989738e85f8d24577fd01b2ca084c7184b3ed 100644 (file)
@@ -6,7 +6,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 8daf9de82b73431d630d0e31f5f86abdaa8baa84..39da9da62bbf5337a902d7d1786c52a3120d2ffb 100644 (file)
@@ -6,7 +6,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index c57666196672016d207805775cfada6875363e10..43b40de90484cbb9f44338bb2d74dbb4bf4af291 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index aee5e45f6d35ecd39eb751e0008baeb9f252cfcc..bbe74bcebbae882f2bb1da3211b4f4aad347534c 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 3c7f7378b94d1692af3266e96902e587f2b8379b..d72565a3c646ddb6b4c819019da46b3320f19024 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index b67522df01ace08274a2ce93d5c555e31a4aa75f..2e4c42b377eca5a0825d0a1382bdc07f5ded213f 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index a1e7e6b6fcf7f89a8c87a5153d7c58bd8f0e80ec..8a7b07b6adc81a761cefcb2689c82de187eb82b2 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 6c0759c0db470ab520be6accf81e66ee87ed6e58..77244182ff02988accb670dfa3b10acce5f4e4ac 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 9f74795e226853e516dc4525b7f9b09420ef57bc..e5ff89bcb3f5ddf737a6847be00e7ab9b93d8194 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index f7f5107e754da79f4b2459cc3e2c648cda9fb489..df54d46225cdf6e4401234300cc3db195993a337 100644 (file)
@@ -6,7 +6,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 15623da26200cdf852c90f72aa22648b4535ba09..843942fb4be501c8dcc3d15ced1729a152489011 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 2ac28d297305c5ec6183ff81b4e38f4d1ce7b9c8..fcaa30c611fb186a0418d343b77501328a7d3647 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 9d6e2c1de1f89e5f6420ddfa52d1863766f8d986..43b3ea40c0b66eb554c297a833bfc507a735238f 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 24f7d5ea678a0420570fd6e8711108905083cee2..89ac2022465e72c804dfd0ba5fa56a4db0424cfe 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index c7bffff9ed32c97e4c35f8ef5fa0c1372c82687a..bf6873f95e7226e4f98b28f12ca0502d4f92e719 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 3393a73ca0d68f8d9c219cfd6d374f9ef09a5c92..b78dc7c6d5d72a64ffd1d1ba6dde36aeed38e2b2 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index aa70154cf4fa2e447fa0927411d3dd0abbb74a9d..5ed064e8673c682a08798d221f5a9087e3a72e49 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -114,17 +114,6 @@ acpi_status acpi_ev_enable_gpe(struct acpi_gpe_event_info *gpe_event_info)
 
        ACPI_FUNCTION_TRACE(ev_enable_gpe);
 
-       /*
-        * We will only allow a GPE to be enabled if it has either an associated
-        * method (_Lxx/_Exx) or a handler, or is using the implicit notify
-        * feature. Otherwise, the GPE will be immediately disabled by
-        * acpi_ev_gpe_dispatch the first time it fires.
-        */
-       if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) ==
-           ACPI_GPE_DISPATCH_NONE) {
-               return_ACPI_STATUS(AE_NO_HANDLER);
-       }
-
        /* Clear the GPE (of stale events) */
 
        status = acpi_hw_clear_gpe(gpe_event_info);
@@ -339,7 +328,11 @@ u32 acpi_ev_gpe_detect(struct acpi_gpe_xrupt_info *gpe_xrupt_list)
 {
        acpi_status status;
        struct acpi_gpe_block_info *gpe_block;
+       struct acpi_namespace_node *gpe_device;
        struct acpi_gpe_register_info *gpe_register_info;
+       struct acpi_gpe_event_info *gpe_event_info;
+       u32 gpe_number;
+       struct acpi_gpe_handler_info *gpe_handler_info;
        u32 int_status = ACPI_INTERRUPT_NOT_HANDLED;
        u8 enabled_status_byte;
        u32 status_reg;
@@ -367,6 +360,8 @@ u32 acpi_ev_gpe_detect(struct acpi_gpe_xrupt_info *gpe_xrupt_list)
 
        gpe_block = gpe_xrupt_list->gpe_block_list_head;
        while (gpe_block) {
+               gpe_device = gpe_block->node;
+
                /*
                 * Read all of the 8-bit GPE status and enable registers in this GPE
                 * block, saving all of them. Find all currently active GP events.
@@ -442,16 +437,68 @@ u32 acpi_ev_gpe_detect(struct acpi_gpe_xrupt_info *gpe_xrupt_list)
 
                                /* Examine one GPE bit */
 
+                               gpe_event_info =
+                                   &gpe_block->
+                                   event_info[((acpi_size) i *
+                                               ACPI_GPE_REGISTER_WIDTH) + j];
+                               gpe_number =
+                                   j + gpe_register_info->base_gpe_number;
+
                                if (enabled_status_byte & (1 << j)) {
-                                       /*
-                                        * Found an active GPE. Dispatch the event to a handler
-                                        * or method.
-                                        */
-                                       int_status |=
-                                           acpi_ev_gpe_dispatch(gpe_block->
-                                                                node,
-                                                                &gpe_block->
-                                                                event_info[((acpi_size) i * ACPI_GPE_REGISTER_WIDTH) + j], j + gpe_register_info->base_gpe_number);
+
+                                       /* Invoke global event handler if present */
+
+                                       acpi_gpe_count++;
+                                       if (acpi_gbl_global_event_handler) {
+                                               acpi_gbl_global_event_handler
+                                                   (ACPI_EVENT_TYPE_GPE,
+                                                    gpe_device, gpe_number,
+                                                    acpi_gbl_global_event_handler_context);
+                                       }
+
+                                       /* Found an active GPE */
+
+                                       if (ACPI_GPE_DISPATCH_TYPE
+                                           (gpe_event_info->flags) ==
+                                           ACPI_GPE_DISPATCH_RAW_HANDLER) {
+
+                                               /* Dispatch the event to a raw handler */
+
+                                               gpe_handler_info =
+                                                   gpe_event_info->dispatch.
+                                                   handler;
+
+                                               /*
+                                                * There is no protection around the namespace node
+                                                * and the GPE handler to ensure a safe destruction
+                                                * because:
+                                                * 1. The namespace node is expected to always
+                                                *    exist after loading a table.
+                                                * 2. The GPE handler is expected to be flushed by
+                                                *    acpi_os_wait_events_complete() before the
+                                                *    destruction.
+                                                */
+                                               acpi_os_release_lock
+                                                   (acpi_gbl_gpe_lock, flags);
+                                               int_status |=
+                                                   gpe_handler_info->
+                                                   address(gpe_device,
+                                                           gpe_number,
+                                                           gpe_handler_info->
+                                                           context);
+                                               flags =
+                                                   acpi_os_acquire_lock
+                                                   (acpi_gbl_gpe_lock);
+                                       } else {
+                                               /*
+                                                * Dispatch the event to a standard handler or
+                                                * method.
+                                                */
+                                               int_status |=
+                                                   acpi_ev_gpe_dispatch
+                                                   (gpe_device, gpe_event_info,
+                                                    gpe_number);
+                                       }
                                }
                        }
                }
@@ -484,52 +531,15 @@ unlock_and_exit:
 static void ACPI_SYSTEM_XFACE acpi_ev_asynch_execute_gpe_method(void *context)
 {
        struct acpi_gpe_event_info *gpe_event_info = context;
-       acpi_status status;
-       struct acpi_gpe_event_info *local_gpe_event_info;
+       acpi_status status = AE_OK;
        struct acpi_evaluate_info *info;
        struct acpi_gpe_notify_info *notify;
 
        ACPI_FUNCTION_TRACE(ev_asynch_execute_gpe_method);
 
-       /* Allocate a local GPE block */
-
-       local_gpe_event_info =
-           ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_gpe_event_info));
-       if (!local_gpe_event_info) {
-               ACPI_EXCEPTION((AE_INFO, AE_NO_MEMORY, "while handling a GPE"));
-               return_VOID;
-       }
-
-       status = acpi_ut_acquire_mutex(ACPI_MTX_EVENTS);
-       if (ACPI_FAILURE(status)) {
-               ACPI_FREE(local_gpe_event_info);
-               return_VOID;
-       }
-
-       /* Must revalidate the gpe_number/gpe_block */
-
-       if (!acpi_ev_valid_gpe_event(gpe_event_info)) {
-               status = acpi_ut_release_mutex(ACPI_MTX_EVENTS);
-               ACPI_FREE(local_gpe_event_info);
-               return_VOID;
-       }
-
-       /*
-        * Take a snapshot of the GPE info for this level - we copy the info to
-        * prevent a race condition with remove_handler/remove_block.
-        */
-       ACPI_MEMCPY(local_gpe_event_info, gpe_event_info,
-                   sizeof(struct acpi_gpe_event_info));
-
-       status = acpi_ut_release_mutex(ACPI_MTX_EVENTS);
-       if (ACPI_FAILURE(status)) {
-               ACPI_FREE(local_gpe_event_info);
-               return_VOID;
-       }
-
        /* Do the correct dispatch - normal method or implicit notify */
 
-       switch (local_gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) {
+       switch (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags)) {
        case ACPI_GPE_DISPATCH_NOTIFY:
                /*
                 * Implicit notify.
@@ -542,7 +552,7 @@ static void ACPI_SYSTEM_XFACE acpi_ev_asynch_execute_gpe_method(void *context)
                 * June 2012: Expand implicit notify mechanism to support
                 * notifies on multiple device objects.
                 */
-               notify = local_gpe_event_info->dispatch.notify_list;
+               notify = gpe_event_info->dispatch.notify_list;
                while (ACPI_SUCCESS(status) && notify) {
                        status =
                            acpi_ev_queue_notify_request(notify->device_node,
@@ -566,7 +576,7 @@ static void ACPI_SYSTEM_XFACE acpi_ev_asynch_execute_gpe_method(void *context)
                         * _Lxx/_Exx control method that corresponds to this GPE
                         */
                        info->prefix_node =
-                           local_gpe_event_info->dispatch.method_node;
+                           gpe_event_info->dispatch.method_node;
                        info->flags = ACPI_IGNORE_RETURN_VALUE;
 
                        status = acpi_ns_evaluate(info);
@@ -576,25 +586,27 @@ static void ACPI_SYSTEM_XFACE acpi_ev_asynch_execute_gpe_method(void *context)
                if (ACPI_FAILURE(status)) {
                        ACPI_EXCEPTION((AE_INFO, status,
                                        "while evaluating GPE method [%4.4s]",
-                                       acpi_ut_get_node_name
-                                       (local_gpe_event_info->dispatch.
-                                        method_node)));
+                                       acpi_ut_get_node_name(gpe_event_info->
+                                                             dispatch.
+                                                             method_node)));
                }
                break;
 
        default:
 
-               return_VOID;    /* Should never happen */
+               goto error_exit;        /* Should never happen */
        }
 
        /* Defer enabling of GPE until all notify handlers are done */
 
        status = acpi_os_execute(OSL_NOTIFY_HANDLER,
-                                acpi_ev_asynch_enable_gpe,
-                                local_gpe_event_info);
-       if (ACPI_FAILURE(status)) {
-               ACPI_FREE(local_gpe_event_info);
+                                acpi_ev_asynch_enable_gpe, gpe_event_info);
+       if (ACPI_SUCCESS(status)) {
+               return_VOID;
        }
+
+error_exit:
+       acpi_ev_asynch_enable_gpe(gpe_event_info);
        return_VOID;
 }
 
@@ -622,7 +634,6 @@ static void ACPI_SYSTEM_XFACE acpi_ev_asynch_enable_gpe(void *context)
        (void)acpi_ev_finish_gpe(gpe_event_info);
        acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
 
-       ACPI_FREE(gpe_event_info);
        return;
 }
 
@@ -692,15 +703,6 @@ acpi_ev_gpe_dispatch(struct acpi_namespace_node *gpe_device,
 
        ACPI_FUNCTION_TRACE(ev_gpe_dispatch);
 
-       /* Invoke global event handler if present */
-
-       acpi_gpe_count++;
-       if (acpi_gbl_global_event_handler) {
-               acpi_gbl_global_event_handler(ACPI_EVENT_TYPE_GPE, gpe_device,
-                                             gpe_number,
-                                             acpi_gbl_global_event_handler_context);
-       }
-
        /*
         * Always disable the GPE so that it does not keep firing before
         * any asynchronous activity completes (either from the execution
@@ -741,7 +743,7 @@ acpi_ev_gpe_dispatch(struct acpi_namespace_node *gpe_device,
         * If there is neither a handler nor a method, leave the GPE
         * disabled.
         */
-       switch (gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) {
+       switch (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags)) {
        case ACPI_GPE_DISPATCH_HANDLER:
 
                /* Invoke the installed handler (at interrupt level) */
index d86699eea33c499e96f41bdc139f531d849ee9b7..e0f24c50451343ea79f5782d07749c6e4f366fb4 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -474,10 +474,12 @@ acpi_ev_initialize_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
                         * Ignore GPEs that have no corresponding _Lxx/_Exx method
                         * and GPEs that are used to wake the system
                         */
-                       if (((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) ==
+                       if ((ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) ==
                             ACPI_GPE_DISPATCH_NONE)
-                           || ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK)
-                               == ACPI_GPE_DISPATCH_HANDLER)
+                           || (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) ==
+                               ACPI_GPE_DISPATCH_HANDLER)
+                           || (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) ==
+                               ACPI_GPE_DISPATCH_RAW_HANDLER)
                            || (gpe_event_info->flags & ACPI_GPE_CAN_WAKE)) {
                                continue;
                        }
index 7be9283798795e72ec8eadf4bbc6a8a894807fc7..8840296d5b205080045de3fd93a97f046bcb91f5 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -401,15 +401,17 @@ acpi_ev_match_gpe_method(acpi_handle obj_handle,
                return_ACPI_STATUS(AE_OK);
        }
 
-       if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) ==
-           ACPI_GPE_DISPATCH_HANDLER) {
+       if ((ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) ==
+            ACPI_GPE_DISPATCH_HANDLER) ||
+           (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) ==
+            ACPI_GPE_DISPATCH_RAW_HANDLER)) {
 
                /* If there is already a handler, ignore this GPE method */
 
                return_ACPI_STATUS(AE_OK);
        }
 
-       if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) ==
+       if (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) ==
            ACPI_GPE_DISPATCH_METHOD) {
                /*
                 * If there is already a method, ignore this method. But check
index 17e4bbfdb096c2e9c522b2a387abfed80943d2aa..3a958f3612fe6860ff7082660cf89f30988fee85 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -106,53 +106,6 @@ unlock_and_exit:
        return_ACPI_STATUS(status);
 }
 
-/*******************************************************************************
- *
- * FUNCTION:    acpi_ev_valid_gpe_event
- *
- * PARAMETERS:  gpe_event_info              - Info for this GPE
- *
- * RETURN:      TRUE if the gpe_event is valid
- *
- * DESCRIPTION: Validate a GPE event. DO NOT CALL FROM INTERRUPT LEVEL.
- *              Should be called only when the GPE lists are semaphore locked
- *              and not subject to change.
- *
- ******************************************************************************/
-
-u8 acpi_ev_valid_gpe_event(struct acpi_gpe_event_info *gpe_event_info)
-{
-       struct acpi_gpe_xrupt_info *gpe_xrupt_block;
-       struct acpi_gpe_block_info *gpe_block;
-
-       ACPI_FUNCTION_ENTRY();
-
-       /* No need for spin lock since we are not changing any list elements */
-
-       /* Walk the GPE interrupt levels */
-
-       gpe_xrupt_block = acpi_gbl_gpe_xrupt_list_head;
-       while (gpe_xrupt_block) {
-               gpe_block = gpe_xrupt_block->gpe_block_list_head;
-
-               /* Walk the GPE blocks on this interrupt level */
-
-               while (gpe_block) {
-                       if ((&gpe_block->event_info[0] <= gpe_event_info) &&
-                           (&gpe_block->event_info[gpe_block->gpe_count] >
-                            gpe_event_info)) {
-                               return (TRUE);
-                       }
-
-                       gpe_block = gpe_block->next;
-               }
-
-               gpe_xrupt_block = gpe_xrupt_block->next;
-       }
-
-       return (FALSE);
-}
-
 /*******************************************************************************
  *
  * FUNCTION:    acpi_ev_get_gpe_device
@@ -371,8 +324,10 @@ acpi_ev_delete_gpe_handlers(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
                                                                 ACPI_GPE_REGISTER_WIDTH)
                                                                + j];
 
-                       if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) ==
-                           ACPI_GPE_DISPATCH_HANDLER) {
+                       if ((ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) ==
+                            ACPI_GPE_DISPATCH_HANDLER) ||
+                           (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) ==
+                            ACPI_GPE_DISPATCH_RAW_HANDLER)) {
 
                                /* Delete an installed handler block */
 
@@ -380,10 +335,8 @@ acpi_ev_delete_gpe_handlers(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
                                gpe_event_info->dispatch.handler = NULL;
                                gpe_event_info->flags &=
                                    ~ACPI_GPE_DISPATCH_MASK;
-                       } else
-                           if ((gpe_event_info->
-                                flags & ACPI_GPE_DISPATCH_MASK) ==
-                               ACPI_GPE_DISPATCH_NOTIFY) {
+                       } else if (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags)
+                                  == ACPI_GPE_DISPATCH_NOTIFY) {
 
                                /* Delete the implicit notification device list */
 
index 78ac29351c9e1736880471724849c2028dd5b5c7..74e8595f5a2bbf5dbfc82ff3e5d1e3b99c3b866a 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 24ea3424981bd9e2128eb694bb6c3ea8bb0ecb7e..f7c9dfe7b990134a3b773380868af4e5dcb4b985 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 8eb8575e8c1623fcd3bc907e180111c0aa68df55..9abace3401f9a00891a3e6851ee81001a404aea0 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 1b148a440d67f545ebf1db7d12829de790b0a234..da323390bb70688a8361b8c3022b76641a05fa54 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 29630e303829e22ebaa44a75b0bd8b7f334a549d..0366703d29701a9f7067f82f8a7430cee79cccbf 100644 (file)
@@ -6,7 +6,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 55a58f3ec8dfa66de2ac6465ad71b7cacca1db58..81f2d9e87fad518369acc5eaca9d99172f22fdf5 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
 
 #define _COMPONENT          ACPI_EVENTS
 ACPI_MODULE_NAME("evxface")
+#if (!ACPI_REDUCED_HARDWARE)
+/* Local prototypes */
+static acpi_status
+acpi_ev_install_gpe_handler(acpi_handle gpe_device,
+                           u32 gpe_number,
+                           u32 type,
+                           u8 is_raw_handler,
+                           acpi_gpe_handler address, void *context);
+
+#endif
 
 
 /*******************************************************************************
@@ -76,6 +86,7 @@ ACPI_MODULE_NAME("evxface")
  *              handlers.
  *
  ******************************************************************************/
+
 acpi_status
 acpi_install_notify_handler(acpi_handle device,
                            u32 handler_type,
@@ -717,32 +728,37 @@ ACPI_EXPORT_SYMBOL(acpi_remove_fixed_event_handler)
 
 /*******************************************************************************
  *
- * FUNCTION:    acpi_install_gpe_handler
+ * FUNCTION:    acpi_ev_install_gpe_handler
  *
  * PARAMETERS:  gpe_device      - Namespace node for the GPE (NULL for FADT
  *                                defined GPEs)
  *              gpe_number      - The GPE number within the GPE block
  *              type            - Whether this GPE should be treated as an
  *                                edge- or level-triggered interrupt.
+ *              is_raw_handler  - Whether to install the handler as a raw
+ *                                handler, which takes full control of the GPE
  *              address         - Address of the handler
  *              context         - Value passed to the handler on each GPE
  *
  * RETURN:      Status
  *
- * DESCRIPTION: Install a handler for a General Purpose Event.
+ * DESCRIPTION: Internal function to install a handler for a General Purpose
+ *              Event.
  *
  ******************************************************************************/
-acpi_status
-acpi_install_gpe_handler(acpi_handle gpe_device,
-                        u32 gpe_number,
-                        u32 type, acpi_gpe_handler address, void *context)
+static acpi_status
+acpi_ev_install_gpe_handler(acpi_handle gpe_device,
+                           u32 gpe_number,
+                           u32 type,
+                           u8 is_raw_handler,
+                           acpi_gpe_handler address, void *context)
 {
        struct acpi_gpe_event_info *gpe_event_info;
        struct acpi_gpe_handler_info *handler;
        acpi_status status;
        acpi_cpu_flags flags;
 
-       ACPI_FUNCTION_TRACE(acpi_install_gpe_handler);
+       ACPI_FUNCTION_TRACE(ev_install_gpe_handler);
 
        /* Parameter validation */
 
@@ -775,8 +791,10 @@ acpi_install_gpe_handler(acpi_handle gpe_device,
 
        /* Make sure that there isn't a handler there already */
 
-       if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) ==
-           ACPI_GPE_DISPATCH_HANDLER) {
+       if ((ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) ==
+            ACPI_GPE_DISPATCH_HANDLER) ||
+           (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) ==
+            ACPI_GPE_DISPATCH_RAW_HANDLER)) {
                status = AE_ALREADY_EXISTS;
                goto free_and_exit;
        }
@@ -793,9 +811,10 @@ acpi_install_gpe_handler(acpi_handle gpe_device,
         * automatically during initialization, in which case it has to be
         * disabled now to avoid spurious execution of the handler.
         */
-       if (((handler->original_flags & ACPI_GPE_DISPATCH_METHOD) ||
-            (handler->original_flags & ACPI_GPE_DISPATCH_NOTIFY)) &&
-           gpe_event_info->runtime_count) {
+       if (((ACPI_GPE_DISPATCH_TYPE(handler->original_flags) ==
+             ACPI_GPE_DISPATCH_METHOD) ||
+            (ACPI_GPE_DISPATCH_TYPE(handler->original_flags) ==
+             ACPI_GPE_DISPATCH_NOTIFY)) && gpe_event_info->runtime_count) {
                handler->originally_enabled = TRUE;
                (void)acpi_ev_remove_gpe_reference(gpe_event_info);
 
@@ -816,7 +835,10 @@ acpi_install_gpe_handler(acpi_handle gpe_device,
 
        gpe_event_info->flags &=
            ~(ACPI_GPE_XRUPT_TYPE_MASK | ACPI_GPE_DISPATCH_MASK);
-       gpe_event_info->flags |= (u8)(type | ACPI_GPE_DISPATCH_HANDLER);
+       gpe_event_info->flags |=
+           (u8)(type |
+                (is_raw_handler ? ACPI_GPE_DISPATCH_RAW_HANDLER :
+                 ACPI_GPE_DISPATCH_HANDLER));
 
        acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
 
@@ -830,8 +852,76 @@ free_and_exit:
        goto unlock_and_exit;
 }
 
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_install_gpe_handler
+ *
+ * PARAMETERS:  gpe_device      - Namespace node for the GPE (NULL for FADT
+ *                                defined GPEs)
+ *              gpe_number      - The GPE number within the GPE block
+ *              type            - Whether this GPE should be treated as an
+ *                                edge- or level-triggered interrupt.
+ *              address         - Address of the handler
+ *              context         - Value passed to the handler on each GPE
+ *
+ * RETURN:      Status
+ *
+ * DESCRIPTION: Install a handler for a General Purpose Event.
+ *
+ ******************************************************************************/
+
+acpi_status
+acpi_install_gpe_handler(acpi_handle gpe_device,
+                        u32 gpe_number,
+                        u32 type, acpi_gpe_handler address, void *context)
+{
+       acpi_status status;
+
+       ACPI_FUNCTION_TRACE(acpi_install_gpe_handler);
+
+       status =
+           acpi_ev_install_gpe_handler(gpe_device, gpe_number, type, FALSE,
+                                       address, context);
+
+       return_ACPI_STATUS(status);
+}
+
 ACPI_EXPORT_SYMBOL(acpi_install_gpe_handler)
 
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_install_gpe_raw_handler
+ *
+ * PARAMETERS:  gpe_device      - Namespace node for the GPE (NULL for FADT
+ *                                defined GPEs)
+ *              gpe_number      - The GPE number within the GPE block
+ *              type            - Whether this GPE should be treated as an
+ *                                edge- or level-triggered interrupt.
+ *              address         - Address of the handler
+ *              context         - Value passed to the handler on each GPE
+ *
+ * RETURN:      Status
+ *
+ * DESCRIPTION: Install a raw handler for a General Purpose Event. The
+ *              handler becomes entirely responsible for the GPE; the
+ *              ACPICA core will not process or clear it.
+ *
+ ******************************************************************************/
+acpi_status
+acpi_install_gpe_raw_handler(acpi_handle gpe_device,
+                            u32 gpe_number,
+                            u32 type, acpi_gpe_handler address, void *context)
+{
+       acpi_status status;
+
+       ACPI_FUNCTION_TRACE(acpi_install_gpe_raw_handler);
+
+       status = acpi_ev_install_gpe_handler(gpe_device, gpe_number, type, TRUE,
+                                            address, context);
+
+       return_ACPI_STATUS(status);
+}
+
+ACPI_EXPORT_SYMBOL(acpi_install_gpe_raw_handler)
+
 /*******************************************************************************
  *
  * FUNCTION:    acpi_remove_gpe_handler
@@ -880,8 +970,10 @@ acpi_remove_gpe_handler(acpi_handle gpe_device,
 
        /* Make sure that a handler is indeed installed */
 
-       if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) !=
-           ACPI_GPE_DISPATCH_HANDLER) {
+       if ((ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) !=
+            ACPI_GPE_DISPATCH_HANDLER) &&
+           (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) !=
+            ACPI_GPE_DISPATCH_RAW_HANDLER)) {
                status = AE_NOT_EXIST;
                goto unlock_and_exit;
        }
@@ -896,6 +988,7 @@ acpi_remove_gpe_handler(acpi_handle gpe_device,
        /* Remove the handler */
 
        handler = gpe_event_info->dispatch.handler;
+       gpe_event_info->dispatch.handler = NULL;
 
        /* Restore Method node (if any), set dispatch flags */
 
@@ -909,9 +1002,10 @@ acpi_remove_gpe_handler(acpi_handle gpe_device,
         * enabled, it should be enabled at this point to restore the
         * post-initialization configuration.
         */
-       if (((handler->original_flags & ACPI_GPE_DISPATCH_METHOD) ||
-            (handler->original_flags & ACPI_GPE_DISPATCH_NOTIFY)) &&
-           handler->originally_enabled) {
+       if (((ACPI_GPE_DISPATCH_TYPE(handler->original_flags) ==
+             ACPI_GPE_DISPATCH_METHOD) ||
+            (ACPI_GPE_DISPATCH_TYPE(handler->original_flags) ==
+             ACPI_GPE_DISPATCH_NOTIFY)) && handler->originally_enabled) {
                (void)acpi_ev_add_gpe_reference(gpe_event_info);
        }
 
index bb8cbf5961bf46de0cf241e18bd76cf18c6f8b6e..df06a23c4197cbbb8755a9f2363a4eb951484a21 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index e889a5304abd2fe9206b4a9a911378f4f4a0e050..70eb47e3d724100a6a0b5cdecb1dd4208b1afa13 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -132,7 +132,7 @@ acpi_status acpi_enable_gpe(acpi_handle gpe_device, u32 gpe_number)
         */
        gpe_event_info = acpi_ev_get_gpe_event_info(gpe_device, gpe_number);
        if (gpe_event_info) {
-               if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) !=
+               if (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) !=
                    ACPI_GPE_DISPATCH_NONE) {
                        status = acpi_ev_add_gpe_reference(gpe_event_info);
                } else {
@@ -183,6 +183,77 @@ acpi_status acpi_disable_gpe(acpi_handle gpe_device, u32 gpe_number)
 
 ACPI_EXPORT_SYMBOL(acpi_disable_gpe)
 
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_set_gpe
+ *
+ * PARAMETERS:  gpe_device          - Parent GPE Device. NULL for GPE0/GPE1
+ *              gpe_number          - GPE level within the GPE block
+ *              action              - ACPI_GPE_ENABLE or ACPI_GPE_DISABLE
+ *
+ * RETURN:      Status
+ *
+ * DESCRIPTION: Enable or disable an individual GPE. This function bypasses
+ *              the reference count mechanism used in the acpi_enable_gpe(),
+ *              acpi_disable_gpe() interfaces.
+ *              This interface is typically used by a GPE raw handler driver
+ *              to switch the GPE between interrupt mode and polling mode
+ *              after the driver has enabled it.
+ *              The interfaces should be invoked in this order:
+ *               acpi_enable_gpe()              <- Ensure the reference count > 0
+ *               acpi_set_gpe(ACPI_GPE_DISABLE) <- Enter polling mode
+ *               acpi_set_gpe(ACPI_GPE_ENABLE)  <- Leave polling mode
+ *               acpi_disable_gpe()             <- Decrease the reference count
+ *
+ * Note: If a GPE is shared by two silicon components, both drivers should
+ *       support GPE polling mode, since disabling the GPE for a long period
+ *       by one driver may break the other. Use this interface with care, as
+ *       all firmware _Lxx/_Exx handlers currently rely on GPE interrupt mode.
+ *
+ ******************************************************************************/
+acpi_status acpi_set_gpe(acpi_handle gpe_device, u32 gpe_number, u8 action)
+{
+       struct acpi_gpe_event_info *gpe_event_info;
+       acpi_status status;
+       acpi_cpu_flags flags;
+
+       ACPI_FUNCTION_TRACE(acpi_set_gpe);
+
+       flags = acpi_os_acquire_lock(acpi_gbl_gpe_lock);
+
+       /* Ensure that we have a valid GPE number */
+
+       gpe_event_info = acpi_ev_get_gpe_event_info(gpe_device, gpe_number);
+       if (!gpe_event_info) {
+               status = AE_BAD_PARAMETER;
+               goto unlock_and_exit;
+       }
+
+       /* Perform the action */
+
+       switch (action) {
+       case ACPI_GPE_ENABLE:
+
+               status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_ENABLE);
+               break;
+
+       case ACPI_GPE_DISABLE:
+
+               status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_DISABLE);
+               break;
+
+       default:
+
+               status = AE_BAD_PARAMETER;
+               break;
+       }
+
+unlock_and_exit:
+       acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
+       return_ACPI_STATUS(status);
+}
+
+ACPI_EXPORT_SYMBOL(acpi_set_gpe)
 
 /*******************************************************************************
  *
@@ -313,7 +384,7 @@ acpi_setup_gpe_for_wake(acpi_handle wake_device,
         * known as an "implicit notify". Note: The GPE is assumed to be
         * level-triggered (for windows compatibility).
         */
-       if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) ==
+       if (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) ==
            ACPI_GPE_DISPATCH_NONE) {
                /*
                 * This is the first device for implicit notify on this GPE.
@@ -327,7 +398,7 @@ acpi_setup_gpe_for_wake(acpi_handle wake_device,
         * If we already have an implicit notify on this GPE, add
         * this device to the notify list.
         */
-       if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) ==
+       if (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) ==
            ACPI_GPE_DISPATCH_NOTIFY) {
 
                /* Ensure that the device is not already in the list */
@@ -530,6 +601,49 @@ unlock_and_exit:
 
 ACPI_EXPORT_SYMBOL(acpi_get_gpe_status)
 
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_finish_gpe
+ *
+ * PARAMETERS:  gpe_device          - Namespace node for the GPE Block
+ *                                    (NULL for FADT defined GPEs)
+ *              gpe_number          - GPE level within the GPE block
+ *
+ * RETURN:      Status
+ *
+ * DESCRIPTION: Clear and conditionally reenable a GPE. This completes the GPE
+ *              processing. Intended for use by asynchronous host-installed
+ *              GPE handlers. The GPE is only reenabled if the enable_for_run bit
+ *              is set in the GPE info.
+ *
+ ******************************************************************************/
+acpi_status acpi_finish_gpe(acpi_handle gpe_device, u32 gpe_number)
+{
+       struct acpi_gpe_event_info *gpe_event_info;
+       acpi_status status;
+       acpi_cpu_flags flags;
+
+       ACPI_FUNCTION_TRACE(acpi_finish_gpe);
+
+       flags = acpi_os_acquire_lock(acpi_gbl_gpe_lock);
+
+       /* Ensure that we have a valid GPE number */
+
+       gpe_event_info = acpi_ev_get_gpe_event_info(gpe_device, gpe_number);
+       if (!gpe_event_info) {
+               status = AE_BAD_PARAMETER;
+               goto unlock_and_exit;
+       }
+
+       status = acpi_ev_finish_gpe(gpe_event_info);
+
+unlock_and_exit:
+       acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
+       return_ACPI_STATUS(status);
+}
+
+ACPI_EXPORT_SYMBOL(acpi_finish_gpe)
+
 /******************************************************************************
  *
  * FUNCTION:    acpi_disable_all_gpes
@@ -604,7 +718,6 @@ ACPI_EXPORT_SYMBOL(acpi_enable_all_runtime_gpes)
  *              all GPE blocks.
  *
  ******************************************************************************/
-
 acpi_status acpi_enable_all_wakeup_gpes(void)
 {
        acpi_status status;
index 2d6f187939c703db962bac5fbfad6ca9c9140bfa..f21afbab03f74b8f796b9209b22b4291d856db3c 100644 (file)
@@ -6,7 +6,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 7d2949420db7085c3a1f696e601eb426133aafee..6e0df2b9d5a475f329f2d40bfda2b592a26b764c 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index c545386fee96cd825f20a5099e1f82aeae39bc85..89a976b4ccf2ad9f8d01ed702b2288d5080c64a2 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 95d23dabcfbbe00e4221818b468c8e92483f5942..aaeea4840aaa367cc7599809d1ac5a80e51f92e3 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 6fbfad47518c825c046f9c3eea50aafa103f48fe..e67d0aca3fe68b23a9f84284c1a017736a708b8a 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 0f23c3f2678ea8a8ce85b0bb5873bbd9602a3ad6..7c213b6b64723c92a2954472c4e57f0d61ed8ecd 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index b994845ed359bdb7b15f311ba76c45214112aea4..c161dd974f741c1700c501fa6576585034c67e43 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 1d1b27a96c5bb1c35cf5cacb58fc8cf26746d47e..49479927e7f746b90beff1c25be79035e1da1427 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 2207e624f5388e110c80e9f744352bc7a928c8d7..b56fc9d6f48e3a1180fd234d1e9eacf811202454 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index b49ea2a95f4f99f5afde46ee9a1394d16078d675..472030f2b5bb33b56342bed34f25852424834a90 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index dbb03b544e8c46a23637e2283fc808f523342a84..453b00c301773fbb9977efc583f5a9163e54462a 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 1b8e941044079214882a75c7ad7a18a564ef9643..77930683ab7ddcf86620c3f3976bacb2faf85273 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 2ede656ee26a5837912440f5ba8cc27896428ca2..fcc618aa2061496e089c73fb0c8e760e165bd64b 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 363767cf01e5af3c09e4401fc78166ae4aa78ece..b813fed95e563ccdc0f1e14a2804ccb497ecdd82 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 29e9e99f7fe3010b17f4b4ef0e991d72a6d2f39a..c930edda3f656321d110df599113f6bf0a539ee9 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 118e942005e5d78ea4a98e4c4d0ff3ba5bc9224c..4c2836dc825bef2d175f6d898acb29c25fd3af86 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index cd5288a257a91fe9e6b5b80c55383151fef6cf6f..0fe188e238efaef41b757608a876a6f2b34631e8 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index ab060261b43e66e9ab3c69b20ee0f88c169bf092..c7e3b929aa854b26be7002f661f187c9acc2dc01 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 3cde553bcbe182d06e277d26fbd188d4f9fd7237..b6b7f3af29e440b242789d18d49aa002e4eef796 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 3af8de3fcea43824a635112871850c0c0d5e1f94..d2964af9ad4df52e9846c02a7e129e7d0f8c0ea2 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index daf49f7ea311385a1342327ef732015f3a384f13..a7eee2400ce01a592bcbe736745a6d3f0e6190f4 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 04bd16c08f9e5fc08cf330475c44660d19b02ab3..3101607b4efeaa5721e1cd62752ff27d576e928c 100644 (file)
@@ -6,7 +6,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index fd11018b01687feb839b9253a460395500cca8de..6fa3c8d8fc5f9fd08e8df6f37c781134e7dab842 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index f7da64123ed58b51cbb9679222dadedbb1dcd754..05450656fe3d1606f9b432308910ee1a303e3f51 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index d9d72dff2a76aa6308592b43d7a8ca3a20a98014..3f4225e95d9311ffe5a915f5081f686b0619ca4f 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 1e66d960fc118fcb3cdcc8faf68660fede2e6c6d..e5c5949f908146a5da8d99d80005b877cd2db9dc 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 858fdd6be5982ee205c0bbff92f1d71d95dacf15..e5599f610808373cd4e2ded7684a539227365e4f 100644 (file)
@@ -6,7 +6,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 494027f5c06771d7b97472a8222f5ed5716b0cb8..84bc550f4f1d2bd9b8c3dec7b4dab9fdee013cf2 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -54,6 +54,10 @@ acpi_hw_enable_wakeup_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
                                struct acpi_gpe_block_info *gpe_block,
                                void *context);
 
+static acpi_status
+acpi_hw_gpe_enable_write(u8 enable_mask,
+                        struct acpi_gpe_register_info *gpe_register_info);
+
 /******************************************************************************
  *
  * FUNCTION:   acpi_hw_get_gpe_register_bit
@@ -146,7 +150,7 @@ acpi_hw_low_set_gpe(struct acpi_gpe_event_info *gpe_event_info, u32 action)
 
        status = acpi_hw_write(enable_mask, &gpe_register_info->enable_address);
        if (ACPI_SUCCESS(status) && (action & ACPI_GPE_SAVE_MASK)) {
-               gpe_register_info->enable_mask = enable_mask;
+               gpe_register_info->enable_mask = (u8)enable_mask;
        }
        return (status);
 }
@@ -221,7 +225,7 @@ acpi_hw_get_gpe_status(struct acpi_gpe_event_info * gpe_event_info,
 
        /* GPE currently handled? */
 
-       if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) !=
+       if (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) !=
            ACPI_GPE_DISPATCH_NONE) {
                local_event_status |= ACPI_EVENT_FLAG_HAS_HANDLER;
        }
index 6aade8e1d2a1fcaccd3cf8add36b6f97d2164049..c5214dec49880cd0678943e15ece6c6211bf3291 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index a4c34d2c556b24d14f4954d3afe516a39f9b10d9..3cf77afd142c8c397bedf4c927be4b953a020865 100644 (file)
@@ -6,7 +6,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index d590693eb54ecb3701962617d2f8b36f82105c73..7d21cae6d60287b06640d20c5a6dda97a5e3f6d0 100644 (file)
@@ -6,7 +6,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 76ab5c1a814eb3d1696dd298fb940ffe1c4e04a8..675c709a300b876d7f4dc04a1978ccc40dec53e5 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 6b919127cd9dcf650ba0bf52faa7c8f95a201aa0..2bd33fe56cb36c8f3b5670c5e02440c3068cc799 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 96d007df65ec15d537db26319d472938d988ff1e..5f97468df8ff04e6bf0856a7b64324c16992cfb8 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 6921c7f3d208f290ed0a1db3a0ce72181bf6537f..3b3767698827f9d2553af49a3bdbdbae9f331613 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index f1249e3463bedc78122250d4f63a5e531588ef93..24fa19a76d704102b89ada50aa836dbf95d0171e 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 607eb9e5150de6261a849b285e58dc9fdffc636b..e107f929d9cf1e4ebe083df79fe0db1b9887afd1 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 80fcfc8c9c1b79039dae19cf05ee5974a4a9ceed..5d347a71bd0b3915627c65c3b05db27426941e97 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index b55642c4ee585c5fe5b61d25cbd0fd7f4ddbcbc4..1a8b39c8d969d9567654c23cff984285b241f0c1 100644 (file)
@@ -6,7 +6,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 3d88ef4a3e0d2afd93cd9b575adb9d7e49da9cd4..80f097eb7381618710653804357a1a88b84e30cb 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 42d37109aa5d0fdbb18bc0f72ddb9ffe50e364a1..7dc367e6fe09d9a4ca0a7dbb8d3637f3e3873834 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index e634a05974db6a52d4e6bfc2e25996703074c4a8..7bcc68f57afa61d24d6d170e04bb9c2db8271900 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index a3fb7e4c08092d05abe5162b85d3c1a6297b9c94..4a85c45179883b14421f324d4fcc2b9f923f0309 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 7c9d0181f341ea5f762858dac80f5cadb28e24c8..bd6cd4a81316cf7240a48c5f6ee1863383b17674 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 7eee0a6f02f60d28b96754338ad47120f272fe14..d293d974803639826fe27a83a7c130ad5c984ed5 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index a42ee9d6970d727733cc260912352f56ba89cd22..677bc9330e647c413d34ac958ee4ababbf762300 100644 (file)
@@ -6,7 +6,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index e83cff31754bba937e292433e796922eecbfeb45..c95a119767b56fad8569b31d904c42edb643361f 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 392910ffbed9ac92e16953b76ad42768a4276846..0eb54315b4bebbd131c5b4c9ae40909ea0ca61b7 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 1b13b921dda933e373b3e237886e55269df94aa7..8b79958b7aca35609d54f762d191dcc386cf04d0 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 7e417aa5c91e2784cec2187e04efa31d103ce1c0..151fcd95ba849e8fd6a5fe5ccdd29e418bf043bd 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index b09e6bef72b88471352fede319e0c690591efd97..c30672d238789668fdfd70ea7ad512838e3e7f55 100644 (file)
@@ -6,7 +6,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index af1cc42a8aa183bbae45f2b3794e1acfec888f51..4a9d4a66016e51eefcb851979489bb871edab2a2 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 4a5e3f5c0ff78afede734885a02da7da73f25d44..6ad02008c0c23ae9cb7bbd3f4d8fa86b348f2dc6 100644 (file)
@@ -6,7 +6,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 4758a1f2ce22abb098193bf59495eb75324efdf8..c68609a2bc1b6d26bf3b9a3af3e4a4a74d31dcee 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 4bd558bf10d226efa496b9974c08395530aae9f3..b6030a2deee1248ab9b783bc3d30ce7ac9fbaa75 100644 (file)
@@ -6,7 +6,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 8c6c11ce9760b889dca1c9442cade74c7ae90054..d66c326485d82e769d5c93d4692ac0b02ca277f1 100644 (file)
@@ -6,7 +6,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index dae9401be7a2d6cef946f4d3ca3fab173165cea6..793383501f81f05cffd099559a0b6d8787f53880 100644 (file)
@@ -6,7 +6,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
 #define _COMPONENT          ACPI_NAMESPACE
 ACPI_MODULE_NAME("nsxfobj")
 
-/*******************************************************************************
- *
- * FUNCTION:    acpi_get_id
- *
- * PARAMETERS:  Handle          - Handle of object whose id is desired
- *              ret_id          - Where the id will be placed
- *
- * RETURN:      Status
- *
- * DESCRIPTION: This routine returns the owner id associated with a handle
- *
- ******************************************************************************/
-acpi_status acpi_get_id(acpi_handle handle, acpi_owner_id * ret_id)
-{
-       struct acpi_namespace_node *node;
-       acpi_status status;
-
-       /* Parameter Validation */
-
-       if (!ret_id) {
-               return (AE_BAD_PARAMETER);
-       }
-
-       status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
-       if (ACPI_FAILURE(status)) {
-               return (status);
-       }
-
-       /* Convert and validate the handle */
-
-       node = acpi_ns_validate_handle(handle);
-       if (!node) {
-               (void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
-               return (AE_BAD_PARAMETER);
-       }
-
-       *ret_id = node->owner_id;
-
-       status = acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
-       return (status);
-}
-
-ACPI_EXPORT_SYMBOL(acpi_get_id)
-
 /*******************************************************************************
  *
  * FUNCTION:    acpi_get_type
index 314d314340ae8a9151f254ba7b65715baeaf5bdb..6d038770577b3f17fe6d54a4abb035162f4dd4cf 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index b058e2390fdd6f3558f15e628273558fedd5acda..90437227d79095fdaec394eb101c90d73543f961 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index a6885077d59ee7373c7ff8c6fcc1fb327a455eb3..2f5ddd806c58bac51c88f16c53705c6213108c47 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 1755d2ac5656ae5c975e73b04e912741cef8fdf4..1af4a405e351204a275b1e3850d7264cb6c9e85f 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 0d8d37ffd04d34d1a8a638d05f511a52f6047969..e18e7c47f482769a7c0a359727fdae7d002272b7 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 6d27b597394e907055417402cb7f35bab3ed63d1..a555f7f7b9a215d2a899f0e656faae857f3fcf76 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 32d250feea214acbef13cc8afab6af017bc47d0b..9d669cc6cb62923809746a7c405dca35f1d890bb 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 0b64181e772090fec84aeb2494781459a43fedeb..89984f30addca71dfe02080335daec6ea2c0067d 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 3cd48802eede240750a45a3e4a57c3475fc58e99..960505ab409a8b3958336058bb26837b6c9109c0 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 9cb07e1e76d9dba5a0b4403350e436606c48bcf4..ba5f691712882673fe76d83384fe29ec360dcda3 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index e135acaa5e1c2cb89ae522d2b8f0b9c4d53f4851..841a5ea0609425ed5c2e97aa72af0118302289ff 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 916fd095ff342b53a11b0c57d16110bd5f108889..66d406e8fe366a28430158dc7b3d9165e930b607 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -74,7 +74,7 @@ struct acpi_rsconvert_info acpi_rs_convert_address16[5] = {
         * Address Translation Offset
         * Address Length
         */
-       {ACPI_RSC_MOVE16, ACPI_RS_OFFSET(data.address16.granularity),
+       {ACPI_RSC_MOVE16, ACPI_RS_OFFSET(data.address16.address.granularity),
         AML_OFFSET(address16.granularity),
         5},
 
@@ -112,7 +112,7 @@ struct acpi_rsconvert_info acpi_rs_convert_address32[5] = {
         * Address Translation Offset
         * Address Length
         */
-       {ACPI_RSC_MOVE32, ACPI_RS_OFFSET(data.address32.granularity),
+       {ACPI_RSC_MOVE32, ACPI_RS_OFFSET(data.address32.address.granularity),
         AML_OFFSET(address32.granularity),
         5},
 
@@ -150,7 +150,7 @@ struct acpi_rsconvert_info acpi_rs_convert_address64[5] = {
         * Address Translation Offset
         * Address Length
         */
-       {ACPI_RSC_MOVE64, ACPI_RS_OFFSET(data.address64.granularity),
+       {ACPI_RSC_MOVE64, ACPI_RS_OFFSET(data.address64.address.granularity),
         AML_OFFSET(address64.granularity),
         5},
 
@@ -194,7 +194,8 @@ struct acpi_rsconvert_info acpi_rs_convert_ext_address64[5] = {
         * Address Length
         * Type-Specific Attribute
         */
-       {ACPI_RSC_MOVE64, ACPI_RS_OFFSET(data.ext_address64.granularity),
+       {ACPI_RSC_MOVE64,
+        ACPI_RS_OFFSET(data.ext_address64.address.granularity),
         AML_OFFSET(ext_address64.granularity),
         6}
 };
index 689556744b034d82ac43c72bfcd70b4f515fbe83..cb739a6949311d771c9113b645b48989fcfce0b4 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 049d9c22a0f9616824451d0beb29823c4b5ac59f..15434e4c9b344411f6b3c0ec8dbb5193f81b9e41 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index c3c56b5a9788319cec129216882bd4d9cf8faebc..1539394c8c52c3309dc010a023c706adb2637e44 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 2f9332d5c973047c2e9362bfd921a1d2aafb9514..b29d9ec63d1b6fb419645e4b851c6db9647b1912 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -183,15 +183,15 @@ struct acpi_rsdump_info acpi_rs_dump_address16[8] = {
        {ACPI_RSD_TITLE, ACPI_RSD_TABLE_SIZE(acpi_rs_dump_address16),
         "16-Bit WORD Address Space", NULL},
        {ACPI_RSD_ADDRESS, 0, NULL, NULL},
-       {ACPI_RSD_UINT16, ACPI_RSD_OFFSET(address16.granularity), "Granularity",
-        NULL},
-       {ACPI_RSD_UINT16, ACPI_RSD_OFFSET(address16.minimum), "Address Minimum",
-        NULL},
-       {ACPI_RSD_UINT16, ACPI_RSD_OFFSET(address16.maximum), "Address Maximum",
-        NULL},
-       {ACPI_RSD_UINT16, ACPI_RSD_OFFSET(address16.translation_offset),
+       {ACPI_RSD_UINT16, ACPI_RSD_OFFSET(address16.address.granularity),
+        "Granularity", NULL},
+       {ACPI_RSD_UINT16, ACPI_RSD_OFFSET(address16.address.minimum),
+        "Address Minimum", NULL},
+       {ACPI_RSD_UINT16, ACPI_RSD_OFFSET(address16.address.maximum),
+        "Address Maximum", NULL},
+       {ACPI_RSD_UINT16, ACPI_RSD_OFFSET(address16.address.translation_offset),
         "Translation Offset", NULL},
-       {ACPI_RSD_UINT16, ACPI_RSD_OFFSET(address16.address_length),
+       {ACPI_RSD_UINT16, ACPI_RSD_OFFSET(address16.address.address_length),
         "Address Length", NULL},
        {ACPI_RSD_SOURCE, ACPI_RSD_OFFSET(address16.resource_source), NULL, NULL}
 };
@@ -200,15 +200,15 @@ struct acpi_rsdump_info acpi_rs_dump_address32[8] = {
        {ACPI_RSD_TITLE, ACPI_RSD_TABLE_SIZE(acpi_rs_dump_address32),
         "32-Bit DWORD Address Space", NULL},
        {ACPI_RSD_ADDRESS, 0, NULL, NULL},
-       {ACPI_RSD_UINT32, ACPI_RSD_OFFSET(address32.granularity), "Granularity",
-        NULL},
-       {ACPI_RSD_UINT32, ACPI_RSD_OFFSET(address32.minimum), "Address Minimum",
-        NULL},
-       {ACPI_RSD_UINT32, ACPI_RSD_OFFSET(address32.maximum), "Address Maximum",
-        NULL},
-       {ACPI_RSD_UINT32, ACPI_RSD_OFFSET(address32.translation_offset),
+       {ACPI_RSD_UINT32, ACPI_RSD_OFFSET(address32.address.granularity),
+        "Granularity", NULL},
+       {ACPI_RSD_UINT32, ACPI_RSD_OFFSET(address32.address.minimum),
+        "Address Minimum", NULL},
+       {ACPI_RSD_UINT32, ACPI_RSD_OFFSET(address32.address.maximum),
+        "Address Maximum", NULL},
+       {ACPI_RSD_UINT32, ACPI_RSD_OFFSET(address32.address.translation_offset),
         "Translation Offset", NULL},
-       {ACPI_RSD_UINT32, ACPI_RSD_OFFSET(address32.address_length),
+       {ACPI_RSD_UINT32, ACPI_RSD_OFFSET(address32.address.address_length),
         "Address Length", NULL},
        {ACPI_RSD_SOURCE, ACPI_RSD_OFFSET(address32.resource_source), NULL, NULL}
 };
@@ -217,15 +217,15 @@ struct acpi_rsdump_info acpi_rs_dump_address64[8] = {
        {ACPI_RSD_TITLE, ACPI_RSD_TABLE_SIZE(acpi_rs_dump_address64),
         "64-Bit QWORD Address Space", NULL},
        {ACPI_RSD_ADDRESS, 0, NULL, NULL},
-       {ACPI_RSD_UINT64, ACPI_RSD_OFFSET(address64.granularity), "Granularity",
-        NULL},
-       {ACPI_RSD_UINT64, ACPI_RSD_OFFSET(address64.minimum), "Address Minimum",
-        NULL},
-       {ACPI_RSD_UINT64, ACPI_RSD_OFFSET(address64.maximum), "Address Maximum",
-        NULL},
-       {ACPI_RSD_UINT64, ACPI_RSD_OFFSET(address64.translation_offset),
+       {ACPI_RSD_UINT64, ACPI_RSD_OFFSET(address64.address.granularity),
+        "Granularity", NULL},
+       {ACPI_RSD_UINT64, ACPI_RSD_OFFSET(address64.address.minimum),
+        "Address Minimum", NULL},
+       {ACPI_RSD_UINT64, ACPI_RSD_OFFSET(address64.address.maximum),
+        "Address Maximum", NULL},
+       {ACPI_RSD_UINT64, ACPI_RSD_OFFSET(address64.address.translation_offset),
         "Translation Offset", NULL},
-       {ACPI_RSD_UINT64, ACPI_RSD_OFFSET(address64.address_length),
+       {ACPI_RSD_UINT64, ACPI_RSD_OFFSET(address64.address.address_length),
         "Address Length", NULL},
        {ACPI_RSD_SOURCE, ACPI_RSD_OFFSET(address64.resource_source), NULL, NULL}
 };
@@ -234,15 +234,16 @@ struct acpi_rsdump_info acpi_rs_dump_ext_address64[8] = {
        {ACPI_RSD_TITLE, ACPI_RSD_TABLE_SIZE(acpi_rs_dump_ext_address64),
         "64-Bit Extended Address Space", NULL},
        {ACPI_RSD_ADDRESS, 0, NULL, NULL},
-       {ACPI_RSD_UINT64, ACPI_RSD_OFFSET(ext_address64.granularity),
+       {ACPI_RSD_UINT64, ACPI_RSD_OFFSET(ext_address64.address.granularity),
         "Granularity", NULL},
-       {ACPI_RSD_UINT64, ACPI_RSD_OFFSET(ext_address64.minimum),
+       {ACPI_RSD_UINT64, ACPI_RSD_OFFSET(ext_address64.address.minimum),
         "Address Minimum", NULL},
-       {ACPI_RSD_UINT64, ACPI_RSD_OFFSET(ext_address64.maximum),
+       {ACPI_RSD_UINT64, ACPI_RSD_OFFSET(ext_address64.address.maximum),
         "Address Maximum", NULL},
-       {ACPI_RSD_UINT64, ACPI_RSD_OFFSET(ext_address64.translation_offset),
+       {ACPI_RSD_UINT64,
+        ACPI_RSD_OFFSET(ext_address64.address.translation_offset),
         "Translation Offset", NULL},
-       {ACPI_RSD_UINT64, ACPI_RSD_OFFSET(ext_address64.address_length),
+       {ACPI_RSD_UINT64, ACPI_RSD_OFFSET(ext_address64.address.address_length),
         "Address Length", NULL},
        {ACPI_RSD_UINT64, ACPI_RSD_OFFSET(ext_address64.type_specific),
         "Type-Specific Attribute", NULL}
index 9d3f8a9a24bd1819e8b5ea932df087550aa12f55..edecfc6759798ddf01069915d014a8547a8908c8 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 19d64873290ab63a1bb209bb26c3eefab767d103..5adba018bab05ead6822b86db141bc95af40e4d0 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 3461f7db26dfb561803f1b24f4205b8d93690efe..07cfa70a475b6585173eb1f8eef409af5216872e 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 77291293af64c5bf6f8ed509790a98088aad6035..50d5be2ee0622f28e12fb6f60176fef99c6ed4da 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index eab4483ff5f8e5b44b7f5c2f22b0275925ea7cd1..c6b80862030e948c6e8429cf15edadf616a53c6e 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 41eea4bc089c55c1858467284ff9f43011a9b178..1fe49d22366333b2172e82d622bfe7f6289d8f0d 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 9e8407223d9575ca5dc3d63927c235cf328dd40a..4c8c6fe6ea746cd7d8b45f8eada4357cd747ab6a 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 897a5ceb042009682b2ce109614a695c2f7531f2..ece3cd60cc6a0664608cbe97b1110845ce26330d 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 877ab9202133fd76c65a003daf7592e2486cfb38..8e6276df0226ef5ff5e5297afa0d5708e471c738 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -60,11 +60,11 @@ ACPI_MODULE_NAME("rsxface")
        ACPI_COPY_FIELD(out, in, min_address_fixed);         \
        ACPI_COPY_FIELD(out, in, max_address_fixed);         \
        ACPI_COPY_FIELD(out, in, info);                      \
-       ACPI_COPY_FIELD(out, in, granularity);               \
-       ACPI_COPY_FIELD(out, in, minimum);                   \
-       ACPI_COPY_FIELD(out, in, maximum);                   \
-       ACPI_COPY_FIELD(out, in, translation_offset);        \
-       ACPI_COPY_FIELD(out, in, address_length);            \
+       ACPI_COPY_FIELD(out, in, address.granularity);       \
+       ACPI_COPY_FIELD(out, in, address.minimum);           \
+       ACPI_COPY_FIELD(out, in, address.maximum);           \
+       ACPI_COPY_FIELD(out, in, address.translation_offset); \
+       ACPI_COPY_FIELD(out, in, address.address_length);    \
        ACPI_COPY_FIELD(out, in, resource_source);
 /* Local prototypes */
 static acpi_status
index f499c10ceb4a92549d651955ef8c038d77ac6d02..6a144957aadd65d9095cb2613e120ebbf961832c 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 41519a9580838b1c3915dd93b10e743d39370cbd..7d2486005e3f24fe0ca38d61d8dc36d21362066e 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index cb947700206ca56fb7d4e09cab0998f455a6afaa..0b879fcfef670c535f30ac57cb403ff87dcbf266 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 755b90c40ddf0eb469304a4016768ca1719d0337..9bad45e63a45401bd3f7dd4b5940c420bc63dd0b 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index df3bb20ea3255c9cf745e8ad8e84f4eaae0292f4..ef16c06e5091f0eb5dec9b00bcc9ec7a262f46a6 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 6b1ca9991b9049147115992e44c752d97eb1da64..6559a58439c5dfadb0adbb19ac8c016d4197573b 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 6482b0ded65214e120ebb51a01f8fc6790ca7353..60e94f87f27aeea917c9705358ea4675666363d9 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -263,45 +263,6 @@ acpi_get_table_header(char *signature,
 
 ACPI_EXPORT_SYMBOL(acpi_get_table_header)
 
-/*******************************************************************************
- *
- * FUNCTION:    acpi_unload_table_id
- *
- * PARAMETERS:  id            - Owner ID of the table to be removed.
- *
- * RETURN:      Status
- *
- * DESCRIPTION: This routine is used to force the unload of a table (by id)
- *
- ******************************************************************************/
-acpi_status acpi_unload_table_id(acpi_owner_id id)
-{
-       int i;
-       acpi_status status = AE_NOT_EXIST;
-
-       ACPI_FUNCTION_TRACE(acpi_unload_table_id);
-
-       /* Find table in the global table list */
-       for (i = 0; i < acpi_gbl_root_table_list.current_table_count; ++i) {
-               if (id != acpi_gbl_root_table_list.tables[i].owner_id) {
-                       continue;
-               }
-               /*
-                * Delete all namespace objects owned by this table. Note that these
-                * objects can appear anywhere in the namespace by virtue of the AML
-                * "Scope" operator. Thus, we need to track ownership by an ID, not
-                * simply a position within the hierarchy
-                */
-               acpi_tb_delete_namespace_by_owner(i);
-               status = acpi_tb_release_owner_id(i);
-               acpi_tb_set_table_loaded_flag(i, FALSE);
-               break;
-       }
-       return_ACPI_STATUS(status);
-}
-
-ACPI_EXPORT_SYMBOL(acpi_unload_table_id)
-
 /*******************************************************************************
  *
  * FUNCTION:    acpi_get_table_with_size
index ab5308b81aa82e1e5d92fe89c36c60441d381326..aadb3002a2ddd9cfb020d2cbcf49919313c1c589 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 43a54af2b548fef2b3ebabadd5fe347d8a6a9ecb..eac52cf14f1ae8f2d71299070fb522edfc1fcf6d 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index a1acec9d2ef36b20539ad32f9f0c37dcb79dcab3..1279f50da7572b5f62ca7830394d51356eccb306 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index efac83c606dce04470b9b2c09487a882afff7565..61d8f6d186d11c15c0f6f6c9489cf2b6d7fdae32 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 038ea887f56292c430bce8222697fde5cac243ee..242bd071f007c27a2371590352b1e2f6747bcdec 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 78fde0aac487ee585d2da9dfe0294126ad545a65..eacc5eee362ebd9e7c5290ff6054ee5009ca633f 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index ff601c0f7c7a0024048f2e2d10478c6a7104ad13..c37ec5035f4c5f48e207cbd5082db3f55780550d 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index e516254c63b2260deeb1c3824292dd62f0472f1f..57078e3ea9b746568b6bbe7349ccc6636782f975 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 40e923e675fc15a31ea7d014cd511adafdf35607..988e23b7795caaaa40f6a4320310f1d8c1026085 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index a3516de213fa5c1bfa5a45dd368c8c06d50aab19..71fce389fd4840c5810241008a07b981d7862456 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 8e544d4688cd827c6a3b05ab734f81775ef7fd9d..9ef80f2828e321a951666061239a8960fea531b3 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 8fed1482d228b2409e2a5beddd623f71d9e0de61..6c738fa0cd42828d0239a157e6a05426ecbf12af 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 0403dcaabaf20f49b9d0141c7b4762f6cb1b8bfc..743a0ae9fb17a655e708953f8ff78175fdbaa904 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 4e263a8cc6f07e95e8c456f63a54208d512970ab..7e1168be39fa6d192a8fc18c49df9557d4c6f713 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 77ceac715f28bd40f1f3ad8f4ff8152f53fbf3f0..5e8df9177da44781ac07942d146e47b97e954cda 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 9afa9441b1838cfceaa16e9d3e4443651e30821b..aa448278ba28f91e27ef732a1945e70978ca632f 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 4b12880e5b11eec311eb4d2d554f5f2132c90088..27431cfc1c4476c6128c90f240bb8d090831486c 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 77120ec9ea860385cf5e3adddd530f83fc790a7f..e402e07b4846690c38f0fec9e0621b62dcd6273f 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index dc6e96547f1836c6d3d4b29af712adf27b52a46f..089f78bbd59bd7bcea55f1b21f8ba0e8a72e7a4d 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index d44dee6ee10a15b61cbd05573a79f0c3300243c2..f9ff100f0159b89a9e208fdde6759711f6107151 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 2e2bb14e1099c1315051aaccd96f42fe277ec1bc..56bbacd576f21429d683dff88832271fcb2900c5 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 82717fff9ffc34bff4161483808f39788ccdae67..37b8b58fcd565e563879f331be4cf983ae4c9f5f 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index dfa9009bfc8704856242b15063fbc23971eb1181..7d83efe1ea296415bf7f57318a239a64d6e856a4 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 685766fc6ca8ae2dbbc4fbe648f08e4f7cd1fcb8..574cd31183132c9e40ae6de1204d9da658d82cc7 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 36bec57ebd23fb93b5b198ef59414ce7a6dac6bd..2959217067cb012beaa8f6de84df8299c094d3fe 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index db30caff130ad2520f4dc1bd5c7ffb3a988b12bb..29e449935a82e5f802a96f8e3c09d257a2bb2529 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 0ce3f5a0dd670945208d9e3b949ff8640d541235..82ca9142e10dde2e8edee916091d65986c11c540 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index bc1ff820c7dd5f639bdbebc56bca62d18c0c6c51..b3505dbc715e63e9eb9a5d4dd459de9f92f58466 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 1cc97a752c15a320b276d68594f5244a1c22ad29..8274cc16edc370cf75c81a07a8f66e2af979c059 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 6dc54b3c28b07e2fe1b04002540a04a25cddfff1..83b6c52490dc06097d945fb0245189543adb3835 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 7d0ee969d781310cf12fa122584fdd294539244a..130dd9f96f0fe72ee03f79f6fa3631d70e608086 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 4dc33130f134bf7ea3df23b215b8130ebd2a5002..c6149a212149d0c270e0f23d16f5cb8cc4beea94 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 49c873c68756bfd94acd42a900d403c1b52f6503..0929187bdce09c74f07e8d1a68f48eed4f2880e6 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 88ef77f3cf88d1f0a94591f48a5f5df3409bef05..306e785f94182c276599cf29b20d8cd5c1dbc83e 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index b1fd6886e439b28ad7987c2a4ce9a3e64a47c6a2..083a76891889244596453059885ef3007aab8320 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 2a0f9e04d3a4b128f71143d1e8788c6976388c16..f2606af3364c7f6ca1c580bc3bc4921df46f1b3f 100644 (file)
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index c0d44d394ca39c63f87f212f0345d0c05d9acdc4..735db11a9b001fdf9811d2ac292ba62bc4e2ff4b 100644 (file)
@@ -1027,7 +1027,6 @@ EXPORT_SYMBOL_GPL(acpi_subsys_freeze);
 
 static struct dev_pm_domain acpi_general_pm_domain = {
        .ops = {
-#ifdef CONFIG_PM
                .runtime_suspend = acpi_subsys_runtime_suspend,
                .runtime_resume = acpi_subsys_runtime_resume,
 #ifdef CONFIG_PM_SLEEP
@@ -1040,7 +1039,6 @@ static struct dev_pm_domain acpi_general_pm_domain = {
                .poweroff = acpi_subsys_suspend,
                .poweroff_late = acpi_subsys_suspend_late,
                .restore_early = acpi_subsys_resume_early,
-#endif
 #endif
        },
 };
index 1b5853f384e268f7efd7928f13d929376094a9a5..14d0c89ada2a0ce8010b67f54ca4befa023a5e50 100644 (file)
@@ -1,8 +1,8 @@
 /*
- *  ec.c - ACPI Embedded Controller Driver (v2.2)
+ *  ec.c - ACPI Embedded Controller Driver (v3)
  *
- *  Copyright (C) 2001-2014 Intel Corporation
- *    Author: 2014       Lv Zheng <lv.zheng@intel.com>
+ *  Copyright (C) 2001-2015 Intel Corporation
+ *    Author: 2014, 2015 Lv Zheng <lv.zheng@intel.com>
  *            2006, 2007 Alexey Starikovskiy <alexey.y.starikovskiy@intel.com>
  *            2006       Denis Sadykov <denis.m.sadykov@intel.com>
  *            2004       Luming Yu <luming.yu@intel.com>
@@ -31,6 +31,7 @@
 
 /* Uncomment next line to get verbose printout */
 /* #define DEBUG */
+#define DEBUG_REF 0
 #define pr_fmt(fmt) "ACPI : EC: " fmt
 
 #include <linux/kernel.h>
@@ -71,20 +72,32 @@ enum ec_command {
 #define ACPI_EC_DELAY          500     /* Wait 500ms max. during EC ops */
 #define ACPI_EC_UDELAY_GLK     1000    /* Wait 1ms max. to get global lock */
 #define ACPI_EC_MSI_UDELAY     550     /* Wait 550us for MSI EC */
+#define ACPI_EC_UDELAY_POLL    1000    /* Wait 1ms for EC transaction polling */
 #define ACPI_EC_CLEAR_MAX      100     /* Maximum number of events to query
                                         * when trying to clear the EC */
 
 enum {
-       EC_FLAGS_QUERY_PENDING,         /* Query is pending */
-       EC_FLAGS_GPE_STORM,             /* GPE storm detected */
+       EC_FLAGS_EVENT_ENABLED,         /* Event is enabled */
+       EC_FLAGS_EVENT_PENDING,         /* Event is pending */
+       EC_FLAGS_EVENT_DETECTED,        /* Event is detected */
        EC_FLAGS_HANDLERS_INSTALLED,    /* Handlers for GPE and
                                         * OpReg are installed */
-       EC_FLAGS_BLOCKED,               /* Transactions are blocked */
+       EC_FLAGS_STARTED,               /* Driver is started */
+       EC_FLAGS_STOPPED,               /* Driver is stopped */
+       EC_FLAGS_COMMAND_STORM,         /* GPE storm occurred during the
+                                        * current command processing */
 };
 
 #define ACPI_EC_COMMAND_POLL           0x01 /* Available for command byte */
 #define ACPI_EC_COMMAND_COMPLETE       0x02 /* Completed last byte */
 
+#define ec_debug_ref(ec, fmt, ...)                                     \
+       do {                                                            \
+               if (DEBUG_REF)                                          \
+                       pr_debug("%lu: " fmt, ec->reference_count,      \
+                                ## __VA_ARGS__);                       \
+       } while (0)
+
 /* ec.c is compiled in acpi namespace so this shows up as acpi.ec_delay param */
 static unsigned int ec_delay __read_mostly = ACPI_EC_DELAY;
 module_param(ec_delay, uint, 0644);
@@ -105,6 +118,7 @@ struct acpi_ec_query_handler {
        acpi_handle handle;
        void *data;
        u8 query_bit;
+       struct kref kref;
 };
 
 struct transaction {
@@ -117,8 +131,12 @@ struct transaction {
        u8 wlen;
        u8 rlen;
        u8 flags;
+       unsigned long timestamp;
 };
 
+static int acpi_ec_query(struct acpi_ec *ec, u8 *data);
+static void advance_transaction(struct acpi_ec *ec);
+
 struct acpi_ec *boot_ec, *first_ec;
 EXPORT_SYMBOL(first_ec);
 
@@ -129,7 +147,28 @@ static int EC_FLAGS_CLEAR_ON_RESUME; /* Needs acpi_ec_clear() on boot/resume */
 static int EC_FLAGS_QUERY_HANDSHAKE; /* Needs QR_EC issued when SCI_EVT set */
 
 /* --------------------------------------------------------------------------
- *                           Transaction Management
+ *                           Device Flags
+ * -------------------------------------------------------------------------- */
+
+static bool acpi_ec_started(struct acpi_ec *ec)
+{
+       return test_bit(EC_FLAGS_STARTED, &ec->flags) &&
+              !test_bit(EC_FLAGS_STOPPED, &ec->flags);
+}
+
+static bool acpi_ec_flushed(struct acpi_ec *ec)
+{
+       return ec->reference_count == 1;
+}
+
+static bool acpi_ec_has_pending_event(struct acpi_ec *ec)
+{
+       return test_bit(EC_FLAGS_EVENT_DETECTED, &ec->flags) ||
+              test_bit(EC_FLAGS_EVENT_PENDING, &ec->flags);
+}
+
+/* --------------------------------------------------------------------------
+ *                           EC Registers
  * -------------------------------------------------------------------------- */
 
 static inline u8 acpi_ec_read_status(struct acpi_ec *ec)
@@ -151,6 +190,7 @@ static inline u8 acpi_ec_read_data(struct acpi_ec *ec)
 {
        u8 x = inb(ec->data_addr);
 
+       ec->curr->timestamp = jiffies;
        pr_debug("EC_DATA(R) = 0x%2.2x\n", x);
        return x;
 }
@@ -159,12 +199,14 @@ static inline void acpi_ec_write_cmd(struct acpi_ec *ec, u8 command)
 {
        pr_debug("EC_SC(W) = 0x%2.2x\n", command);
        outb(command, ec->command_addr);
+       ec->curr->timestamp = jiffies;
 }
 
 static inline void acpi_ec_write_data(struct acpi_ec *ec, u8 data)
 {
        pr_debug("EC_DATA(W) = 0x%2.2x\n", data);
        outb(data, ec->data_addr);
+       ec->curr->timestamp = jiffies;
 }
 
 #ifdef DEBUG
@@ -188,6 +230,203 @@ static const char *acpi_ec_cmd_string(u8 cmd)
 #define acpi_ec_cmd_string(cmd)                "UNDEF"
 #endif
 
+/* --------------------------------------------------------------------------
+ *                           GPE Registers
+ * -------------------------------------------------------------------------- */
+
+static inline bool acpi_ec_is_gpe_raised(struct acpi_ec *ec)
+{
+       acpi_event_status gpe_status = 0;
+
+       (void)acpi_get_gpe_status(NULL, ec->gpe, &gpe_status);
+       return (gpe_status & ACPI_EVENT_FLAG_SET) ? true : false;
+}
+
+static inline void acpi_ec_enable_gpe(struct acpi_ec *ec, bool open)
+{
+       if (open)
+               acpi_enable_gpe(NULL, ec->gpe);
+       else {
+               BUG_ON(ec->reference_count < 1);
+               acpi_set_gpe(NULL, ec->gpe, ACPI_GPE_ENABLE);
+       }
+       if (acpi_ec_is_gpe_raised(ec)) {
+               /*
+                * On some platforms, EN=1 writes cannot trigger GPE. So
+                * software needs to manually trigger a pseudo GPE event on
+                * EN=1 writes.
+                */
+               pr_debug("***** Polling quirk *****\n");
+               advance_transaction(ec);
+       }
+}
+
+static inline void acpi_ec_disable_gpe(struct acpi_ec *ec, bool close)
+{
+       if (close)
+               acpi_disable_gpe(NULL, ec->gpe);
+       else {
+               BUG_ON(ec->reference_count < 1);
+               acpi_set_gpe(NULL, ec->gpe, ACPI_GPE_DISABLE);
+       }
+}
+
+static inline void acpi_ec_clear_gpe(struct acpi_ec *ec)
+{
+       /*
+        * GPE STS is a W1C register, which means:
+        * 1. Software can clear it without worrying about clearing other
+        *    GPEs' STS bits when the hardware sets them in parallel.
+        * 2. As long as software can ensure only clearing it when it is
+        *    set, hardware won't set it in parallel.
+        * So software can clear GPE in any context.
+        * Warning: do not move the check into advance_transaction() as the
+        * EC commands will be sent without GPE raised.
+        */
+       if (!acpi_ec_is_gpe_raised(ec))
+               return;
+       acpi_clear_gpe(NULL, ec->gpe);
+}
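The W1C ("write-1-to-clear") convention the comment above relies on can be sketched in plain userspace C. The names `gpe_sts`, `w1c_clear()` and `w1c_demo()` are hypothetical, chosen for illustration only; they are not kernel APIs:

```c
#include <assert.h>
#include <stdint.h>

/* Simulated GPE status register: hardware sets bits, software clears
 * them by writing 1 to the positions it wants cleared (W1C). */
static uint8_t gpe_sts;

/* Writing a mask clears only the bits named in the mask; STS bits
 * latched by hardware in parallel are left untouched, which is why
 * the kernel comment says the clear is safe in any context. */
static void w1c_clear(uint8_t mask)
{
	gpe_sts &= ~mask;
}

int w1c_demo(void)
{
	gpe_sts = 0x05;		/* hardware raised GPE0 and GPE2 */
	w1c_clear(0x01);	/* software clears only GPE0 */
	return gpe_sts;		/* GPE2 is still pending */
}
```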
+
+/* --------------------------------------------------------------------------
+ *                           Transaction Management
+ * -------------------------------------------------------------------------- */
+
+static void acpi_ec_submit_request(struct acpi_ec *ec)
+{
+       ec->reference_count++;
+       if (ec->reference_count == 1)
+               acpi_ec_enable_gpe(ec, true);
+}
+
+static void acpi_ec_complete_request(struct acpi_ec *ec)
+{
+       bool flushed = false;
+
+       ec->reference_count--;
+       if (ec->reference_count == 0)
+               acpi_ec_disable_gpe(ec, true);
+       flushed = acpi_ec_flushed(ec);
+       if (flushed)
+               wake_up(&ec->wait);
+}
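The reference-counting pattern above (first user enables the GPE, last user disables it, and a flush is complete once only the driver's own reference remains) can be sketched in isolation. `struct ec_sim` and the function names are illustrative stand-ins, not the kernel's symbols:

```c
#include <assert.h>
#include <stdbool.h>

struct ec_sim {
	int reference_count;
	bool gpe_enabled;
};

static void submit_request(struct ec_sim *ec)
{
	/* First outstanding user turns the GPE on. */
	if (++ec->reference_count == 1)
		ec->gpe_enabled = true;
}

static void complete_request(struct ec_sim *ec)
{
	/* Last outstanding user turns the GPE off again. */
	if (--ec->reference_count == 0)
		ec->gpe_enabled = false;
}

/* Flushing is done once only the driver's own reference is left. */
static bool flushed(struct ec_sim *ec)
{
	return ec->reference_count == 1;
}

static int refcount_demo(void)
{
	struct ec_sim ec = { 0, false };

	submit_request(&ec);	/* driver holds one reference while started */
	submit_request(&ec);	/* an in-flight command holds another */
	complete_request(&ec);	/* the command finishes */
	/* Only the driver's reference remains, so a flush may complete,
	 * and the GPE is still enabled for the started driver. */
	return flushed(&ec) && ec.gpe_enabled;
}
```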
+
+static void acpi_ec_set_storm(struct acpi_ec *ec, u8 flag)
+{
+       if (!test_bit(flag, &ec->flags)) {
+               acpi_ec_disable_gpe(ec, false);
+               pr_debug("+++++ Polling enabled +++++\n");
+               set_bit(flag, &ec->flags);
+       }
+}
+
+static void acpi_ec_clear_storm(struct acpi_ec *ec, u8 flag)
+{
+       if (test_bit(flag, &ec->flags)) {
+               clear_bit(flag, &ec->flags);
+               acpi_ec_enable_gpe(ec, false);
+               pr_debug("+++++ Polling disabled +++++\n");
+       }
+}
+
+/*
+ * acpi_ec_submit_flushable_request() - Increase the reference count unless
+ *                                      the flush operation is in progress
+ * @ec: the EC device
+ * @allow_event: whether event should be handled
+ *
+ * This function must be used before taking a new action that should hold
+ * the reference count.  If this function returns false, then the action
+ * must be discarded or it will prevent the flush operation from being
+ * completed.
+ *
+ * During flushing, the QR_EC command needs to pass this check when there is a
+ * pending event, so that the reference count held for the pending event
+ * can be decreased by the completion of the QR_EC command.
+ */
+static bool acpi_ec_submit_flushable_request(struct acpi_ec *ec,
+                                            bool allow_event)
+{
+       if (!acpi_ec_started(ec)) {
+               if (!allow_event || !acpi_ec_has_pending_event(ec))
+                       return false;
+       }
+       acpi_ec_submit_request(ec);
+       return true;
+}
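The gating rule described in the kernel-doc above (new work is refused while the driver is stopping, except the event query that must run to drop a pending event's reference) can be sketched as follows; `struct ec_gate` and the demo names are hypothetical, for illustration only:

```c
#include <assert.h>
#include <stdbool.h>

struct ec_gate {
	int reference_count;
	bool started;
	bool event_pending;
};

/* Reject new work while the driver is not started, except the query
 * transaction that has to run so the pending event's reference can
 * be released and the flush can make progress. */
static bool submit_flushable(struct ec_gate *ec, bool allow_event)
{
	if (!ec->started && !(allow_event && ec->event_pending))
		return false;
	++ec->reference_count;
	return true;
}

static int gate_demo(void)
{
	struct ec_gate ec = { 1, false, true };

	/* An ordinary command is refused while stopped... */
	bool cmd = submit_flushable(&ec, false);
	/* ...but the event query is let through to finish the flush. */
	bool qry = submit_flushable(&ec, true);

	return !cmd && qry && ec.reference_count == 2;
}
```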
+
+static void acpi_ec_submit_event(struct acpi_ec *ec)
+{
+       if (!test_bit(EC_FLAGS_EVENT_DETECTED, &ec->flags) ||
+           !test_bit(EC_FLAGS_EVENT_ENABLED, &ec->flags))
+               return;
+       /* Hold reference for pending event */
+       if (!acpi_ec_submit_flushable_request(ec, true))
+               return;
+       ec_debug_ref(ec, "Increase event\n");
+       if (!test_and_set_bit(EC_FLAGS_EVENT_PENDING, &ec->flags)) {
+               pr_debug("***** Event query started *****\n");
+               schedule_work(&ec->work);
+               return;
+       }
+       acpi_ec_complete_request(ec);
+       ec_debug_ref(ec, "Decrease event\n");
+}
+
+static void acpi_ec_complete_event(struct acpi_ec *ec)
+{
+       if (ec->curr->command == ACPI_EC_COMMAND_QUERY) {
+               clear_bit(EC_FLAGS_EVENT_PENDING, &ec->flags);
+               pr_debug("***** Event query stopped *****\n");
+               /* Unhold reference for pending event */
+               acpi_ec_complete_request(ec);
+               ec_debug_ref(ec, "Decrease event\n");
+               /* Check if there is another SCI_EVT detected */
+               acpi_ec_submit_event(ec);
+       }
+}
+
+static void acpi_ec_submit_detection(struct acpi_ec *ec)
+{
+       /* Hold reference for query submission */
+       if (!acpi_ec_submit_flushable_request(ec, false))
+               return;
+       ec_debug_ref(ec, "Increase query\n");
+       if (!test_and_set_bit(EC_FLAGS_EVENT_DETECTED, &ec->flags)) {
+               pr_debug("***** Event detection blocked *****\n");
+               acpi_ec_submit_event(ec);
+               return;
+       }
+       acpi_ec_complete_request(ec);
+       ec_debug_ref(ec, "Decrease query\n");
+}
+
+static void acpi_ec_complete_detection(struct acpi_ec *ec)
+{
+       if (ec->curr->command == ACPI_EC_COMMAND_QUERY) {
+               clear_bit(EC_FLAGS_EVENT_DETECTED, &ec->flags);
+               pr_debug("***** Event detection unblocked *****\n");
+               /* Unhold reference for query submission */
+               acpi_ec_complete_request(ec);
+               ec_debug_ref(ec, "Decrease query\n");
+       }
+}
+
+static void acpi_ec_enable_event(struct acpi_ec *ec)
+{
+       unsigned long flags;
+
+       spin_lock_irqsave(&ec->lock, flags);
+       set_bit(EC_FLAGS_EVENT_ENABLED, &ec->flags);
+       /*
+        * An event may be pending even with SCI_EVT=0, so QR_EC should
+        * always be issued right after the driver is started.
+        */
+       acpi_ec_submit_detection(ec);
+       spin_unlock_irqrestore(&ec->lock, flags);
+}
+
 static int ec_transaction_completed(struct acpi_ec *ec)
 {
        unsigned long flags;
@@ -200,7 +439,7 @@ static int ec_transaction_completed(struct acpi_ec *ec)
        return ret;
 }
 
-static bool advance_transaction(struct acpi_ec *ec)
+static void advance_transaction(struct acpi_ec *ec)
 {
        struct transaction *t;
        u8 status;
@@ -208,6 +447,12 @@ static bool advance_transaction(struct acpi_ec *ec)
 
        pr_debug("===== %s (%d) =====\n",
                 in_interrupt() ? "IRQ" : "TASK", smp_processor_id());
+       /*
+        * By always clearing STS before handling all indications, we can
+        * ensure a hardware STS 0->1 change after this clearing can always
+        * trigger a GPE interrupt.
+        */
+       acpi_ec_clear_gpe(ec);
        status = acpi_ec_read_status(ec);
        t = ec->curr;
        if (!t)
@@ -223,6 +468,7 @@ static bool advance_transaction(struct acpi_ec *ec)
                                t->rdata[t->ri++] = acpi_ec_read_data(ec);
                                if (t->rlen == t->ri) {
                                        t->flags |= ACPI_EC_COMMAND_COMPLETE;
+                                       acpi_ec_complete_event(ec);
                                        if (t->command == ACPI_EC_COMMAND_QUERY)
                                                pr_debug("***** Command(%s) hardware completion *****\n",
                                                         acpi_ec_cmd_string(t->command));
@@ -233,25 +479,29 @@ static bool advance_transaction(struct acpi_ec *ec)
                } else if (t->wlen == t->wi &&
                           (status & ACPI_EC_FLAG_IBF) == 0) {
                        t->flags |= ACPI_EC_COMMAND_COMPLETE;
+                       acpi_ec_complete_event(ec);
                        wakeup = true;
                }
-               return wakeup;
+               goto out;
        } else {
                if (EC_FLAGS_QUERY_HANDSHAKE &&
                    !(status & ACPI_EC_FLAG_SCI) &&
                    (t->command == ACPI_EC_COMMAND_QUERY)) {
                        t->flags |= ACPI_EC_COMMAND_POLL;
+                       acpi_ec_complete_detection(ec);
                        t->rdata[t->ri++] = 0x00;
                        t->flags |= ACPI_EC_COMMAND_COMPLETE;
+                       acpi_ec_complete_event(ec);
                        pr_debug("***** Command(%s) software completion *****\n",
                                 acpi_ec_cmd_string(t->command));
                        wakeup = true;
                } else if ((status & ACPI_EC_FLAG_IBF) == 0) {
                        acpi_ec_write_cmd(ec, t->command);
                        t->flags |= ACPI_EC_COMMAND_POLL;
+                       acpi_ec_complete_detection(ec);
                } else
                        goto err;
-               return wakeup;
+               goto out;
        }
 err:
        /*
@@ -259,28 +509,27 @@ err:
         * otherwise will take a not handled IRQ as a false one.
         */
        if (!(status & ACPI_EC_FLAG_SCI)) {
-               if (in_interrupt() && t)
-                       ++t->irq_count;
+               if (in_interrupt() && t) {
+                       if (t->irq_count < ec_storm_threshold)
+                               ++t->irq_count;
+                       /* Allow triggering on 0 threshold */
+                       if (t->irq_count == ec_storm_threshold)
+                               acpi_ec_set_storm(ec, EC_FLAGS_COMMAND_STORM);
+               }
        }
-       return wakeup;
+out:
+       if (status & ACPI_EC_FLAG_SCI)
+               acpi_ec_submit_detection(ec);
+       if (wakeup && in_interrupt())
+               wake_up(&ec->wait);
 }
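The storm handling in the error path above counts IRQs that arrive with neither transaction progress nor SCI_EVT set, saturating at the threshold so the switch to polling fires exactly once (and fires immediately for a threshold of 0). A standalone sketch, with `struct txn_sim` and the function names as illustrative stand-ins:

```c
#include <assert.h>
#include <stdbool.h>

struct txn_sim {
	int irq_count;
	bool polling;		/* models EC_FLAGS_COMMAND_STORM */
};

/* Count a spurious IRQ; saturate at the threshold so the storm flag
 * is set exactly once, even if more interrupts keep arriving. */
static void note_spurious_irq(struct txn_sim *t, int threshold)
{
	if (t->irq_count < threshold)
		++t->irq_count;
	/* Equality (not >=) also allows triggering on a 0 threshold. */
	if (t->irq_count == threshold)
		t->polling = true;	/* mask the GPE, poll instead */
}

static bool storm_demo(void)
{
	struct txn_sim t = { 0, false };
	int i;

	for (i = 0; i < 10; i++)
		note_spurious_irq(&t, 8);
	/* The counter saturated at the threshold and polling is on. */
	return t.polling && t.irq_count == 8;
}
```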
 
 static void start_transaction(struct acpi_ec *ec)
 {
        ec->curr->irq_count = ec->curr->wi = ec->curr->ri = 0;
        ec->curr->flags = 0;
-       (void)advance_transaction(ec);
-}
-
-static int acpi_ec_sync_query(struct acpi_ec *ec, u8 *data);
-
-static int ec_check_sci_sync(struct acpi_ec *ec, u8 state)
-{
-       if (state & ACPI_EC_FLAG_SCI) {
-               if (!test_and_set_bit(EC_FLAGS_QUERY_PENDING, &ec->flags))
-                       return acpi_ec_sync_query(ec, NULL);
-       }
-       return 0;
+       ec->curr->timestamp = jiffies;
+       advance_transaction(ec);
 }
 
 static int ec_poll(struct acpi_ec *ec)
@@ -291,20 +540,25 @@ static int ec_poll(struct acpi_ec *ec)
        while (repeat--) {
                unsigned long delay = jiffies +
                        msecs_to_jiffies(ec_delay);
+               unsigned long usecs = ACPI_EC_UDELAY_POLL;
                do {
                        /* don't sleep with disabled interrupts */
                        if (EC_FLAGS_MSI || irqs_disabled()) {
-                               udelay(ACPI_EC_MSI_UDELAY);
+                               usecs = ACPI_EC_MSI_UDELAY;
+                               udelay(usecs);
                                if (ec_transaction_completed(ec))
                                        return 0;
                        } else {
                                if (wait_event_timeout(ec->wait,
                                                ec_transaction_completed(ec),
-                                               msecs_to_jiffies(1)))
+                                               usecs_to_jiffies(usecs)))
                                        return 0;
                        }
                        spin_lock_irqsave(&ec->lock, flags);
-                       (void)advance_transaction(ec);
+                       if (time_after(jiffies,
+                                       ec->curr->timestamp +
+                                       usecs_to_jiffies(usecs)))
+                               advance_transaction(ec);
                        spin_unlock_irqrestore(&ec->lock, flags);
                } while (time_before(jiffies, delay));
                pr_debug("controller reset, restart transaction\n");
@@ -325,21 +579,29 @@ static int acpi_ec_transaction_unlocked(struct acpi_ec *ec,
                udelay(ACPI_EC_MSI_UDELAY);
        /* start transaction */
        spin_lock_irqsave(&ec->lock, tmp);
+       /* Enable GPE for command processing (IBF=0/OBF=1) */
+       if (!acpi_ec_submit_flushable_request(ec, true)) {
+               ret = -EINVAL;
+               goto unlock;
+       }
+       ec_debug_ref(ec, "Increase command\n");
        /* following two actions should be kept atomic */
        ec->curr = t;
        pr_debug("***** Command(%s) started *****\n",
                 acpi_ec_cmd_string(t->command));
        start_transaction(ec);
-       if (ec->curr->command == ACPI_EC_COMMAND_QUERY) {
-               clear_bit(EC_FLAGS_QUERY_PENDING, &ec->flags);
-               pr_debug("***** Event stopped *****\n");
-       }
        spin_unlock_irqrestore(&ec->lock, tmp);
        ret = ec_poll(ec);
        spin_lock_irqsave(&ec->lock, tmp);
+       if (t->irq_count == ec_storm_threshold)
+               acpi_ec_clear_storm(ec, EC_FLAGS_COMMAND_STORM);
        pr_debug("***** Command(%s) stopped *****\n",
                 acpi_ec_cmd_string(t->command));
        ec->curr = NULL;
+       /* Disable GPE for command processing (IBF=0/OBF=1) */
+       acpi_ec_complete_request(ec);
+       ec_debug_ref(ec, "Decrease command\n");
+unlock:
        spin_unlock_irqrestore(&ec->lock, tmp);
        return ret;
 }
@@ -354,10 +616,6 @@ static int acpi_ec_transaction(struct acpi_ec *ec, struct transaction *t)
        if (t->rdata)
                memset(t->rdata, 0, t->rlen);
        mutex_lock(&ec->mutex);
-       if (test_bit(EC_FLAGS_BLOCKED, &ec->flags)) {
-               status = -EINVAL;
-               goto unlock;
-       }
        if (ec->global_lock) {
                status = acpi_acquire_global_lock(ACPI_EC_UDELAY_GLK, &glk);
                if (ACPI_FAILURE(status)) {
@@ -365,26 +623,11 @@ static int acpi_ec_transaction(struct acpi_ec *ec, struct transaction *t)
                        goto unlock;
                }
        }
-       /* disable GPE during transaction if storm is detected */
-       if (test_bit(EC_FLAGS_GPE_STORM, &ec->flags)) {
-               /* It has to be disabled, so that it doesn't trigger. */
-               acpi_disable_gpe(NULL, ec->gpe);
-       }
 
        status = acpi_ec_transaction_unlocked(ec, t);
 
-       /* check if we received SCI during transaction */
-       ec_check_sci_sync(ec, acpi_ec_read_status(ec));
-       if (test_bit(EC_FLAGS_GPE_STORM, &ec->flags)) {
+       if (test_bit(EC_FLAGS_COMMAND_STORM, &ec->flags))
                msleep(1);
-               /* It is safe to enable the GPE outside of the transaction. */
-               acpi_enable_gpe(NULL, ec->gpe);
-       } else if (t->irq_count > ec_storm_threshold) {
-               pr_info("GPE storm detected(%d GPEs), "
-                       "transactions will use polling mode\n",
-                       t->irq_count);
-               set_bit(EC_FLAGS_GPE_STORM, &ec->flags);
-       }
        if (ec->global_lock)
                acpi_release_global_lock(glk);
 unlock:
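The hunk above replaces the old GPE-storm bookkeeping with a per-command check against ec_storm_threshold: once a single transaction has seen that many interrupts, a COMMAND_STORM flag throttles subsequent commands (the retained msleep(1)). A minimal sketch of that threshold latch, with an illustrative threshold value rather than the driver's:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the interrupt-storm latch: counting IRQs per transaction and
 * setting a sticky flag at a threshold.  The value 8 is illustrative. */
enum { STORM_THRESHOLD = 8 };

struct fake_txn2 {
	unsigned irq_count;
};

static bool storm;	/* stand-in for EC_FLAGS_COMMAND_STORM */

static void note_irq(struct fake_txn2 *t)
{
	if (++t->irq_count == STORM_THRESHOLD)
		storm = true;	/* later commands insert a short sleep */
}
```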
@@ -500,7 +743,7 @@ static void acpi_ec_clear(struct acpi_ec *ec)
        u8 value = 0;
 
        for (i = 0; i < ACPI_EC_CLEAR_MAX; i++) {
-               status = acpi_ec_sync_query(ec, &value);
+               status = acpi_ec_query(ec, &value);
                if (status || !value)
                        break;
        }
@@ -511,6 +754,57 @@ static void acpi_ec_clear(struct acpi_ec *ec)
                pr_info("%d stale EC events cleared\n", i);
 }
 
+static void acpi_ec_start(struct acpi_ec *ec, bool resuming)
+{
+       unsigned long flags;
+
+       spin_lock_irqsave(&ec->lock, flags);
+       if (!test_and_set_bit(EC_FLAGS_STARTED, &ec->flags)) {
+               pr_debug("+++++ Starting EC +++++\n");
+               /* Enable GPE for event processing (SCI_EVT=1) */
+               if (!resuming) {
+                       acpi_ec_submit_request(ec);
+                       ec_debug_ref(ec, "Increase driver\n");
+               }
+               pr_info("+++++ EC started +++++\n");
+       }
+       spin_unlock_irqrestore(&ec->lock, flags);
+}
+
+static bool acpi_ec_stopped(struct acpi_ec *ec)
+{
+       unsigned long flags;
+       bool flushed;
+
+       spin_lock_irqsave(&ec->lock, flags);
+       flushed = acpi_ec_flushed(ec);
+       spin_unlock_irqrestore(&ec->lock, flags);
+       return flushed;
+}
+
+static void acpi_ec_stop(struct acpi_ec *ec, bool suspending)
+{
+       unsigned long flags;
+
+       spin_lock_irqsave(&ec->lock, flags);
+       if (acpi_ec_started(ec)) {
+               pr_debug("+++++ Stopping EC +++++\n");
+               set_bit(EC_FLAGS_STOPPED, &ec->flags);
+               spin_unlock_irqrestore(&ec->lock, flags);
+               wait_event(ec->wait, acpi_ec_stopped(ec));
+               spin_lock_irqsave(&ec->lock, flags);
+               /* Disable GPE for event processing (SCI_EVT=1) */
+               if (!suspending) {
+                       acpi_ec_complete_request(ec);
+                       ec_debug_ref(ec, "Decrease driver\n");
+               }
+               clear_bit(EC_FLAGS_STARTED, &ec->flags);
+               clear_bit(EC_FLAGS_STOPPED, &ec->flags);
+               pr_info("+++++ EC stopped +++++\n");
+       }
+       spin_unlock_irqrestore(&ec->lock, flags);
+}
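The new acpi_ec_start()/acpi_ec_stop() pair implements a flush protocol: while stopping, no new requests are accepted, and the stop path waits until every outstanding request has been released. A single-threaded userspace sketch of that gate (the real driver does this under ec->lock with a reference count and wait queue; these names are stand-ins):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the start/stop flush protocol: requests are accepted only
 * while started and not stopping; stop() may complete once the count of
 * outstanding requests drains to zero.  Single-threaded for clarity. */
struct fake_ec {
	bool started;
	bool stopping;
	unsigned long refs;	/* outstanding requests */
};

static bool submit_request(struct fake_ec *ec)
{
	if (!ec->started || ec->stopping)
		return false;	/* rejected: device is (being) stopped */
	ec->refs++;
	return true;
}

static void complete_request(struct fake_ec *ec)
{
	ec->refs--;
}

static bool flushed(const struct fake_ec *ec)
{
	return ec->refs == 0;	/* stop() may finish once this holds */
}
```

In the driver, acpi_ec_stop() sets the STOPPED bit, then sleeps in wait_event() until the equivalent of flushed() becomes true.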
+
 void acpi_ec_block_transactions(void)
 {
        struct acpi_ec *ec = first_ec;
@@ -520,7 +814,7 @@ void acpi_ec_block_transactions(void)
 
        mutex_lock(&ec->mutex);
        /* Prevent transactions from being carried out */
-       set_bit(EC_FLAGS_BLOCKED, &ec->flags);
+       acpi_ec_stop(ec, true);
        mutex_unlock(&ec->mutex);
 }
 
@@ -531,14 +825,11 @@ void acpi_ec_unblock_transactions(void)
        if (!ec)
                return;
 
-       mutex_lock(&ec->mutex);
        /* Allow transactions to be carried out again */
-       clear_bit(EC_FLAGS_BLOCKED, &ec->flags);
+       acpi_ec_start(ec, true);
 
        if (EC_FLAGS_CLEAR_ON_RESUME)
                acpi_ec_clear(ec);
-
-       mutex_unlock(&ec->mutex);
 }
 
 void acpi_ec_unblock_transactions_early(void)
@@ -548,36 +839,33 @@ void acpi_ec_unblock_transactions_early(void)
         * atomic context during wakeup, so we don't need to acquire the mutex).
         */
        if (first_ec)
-               clear_bit(EC_FLAGS_BLOCKED, &first_ec->flags);
+               acpi_ec_start(first_ec, true);
 }
 
-static int acpi_ec_query_unlocked(struct acpi_ec *ec, u8 *data)
+/* --------------------------------------------------------------------------
+                                Event Management
+   -------------------------------------------------------------------------- */
+static struct acpi_ec_query_handler *
+acpi_ec_get_query_handler(struct acpi_ec_query_handler *handler)
 {
-       int result;
-       u8 d;
-       struct transaction t = {.command = ACPI_EC_COMMAND_QUERY,
-                               .wdata = NULL, .rdata = &d,
-                               .wlen = 0, .rlen = 1};
+       if (handler)
+               kref_get(&handler->kref);
+       return handler;
+}
 
-       if (!ec || !data)
-               return -EINVAL;
-       /*
-        * Query the EC to find out which _Qxx method we need to evaluate.
-        * Note that successful completion of the query causes the ACPI_EC_SCI
-        * bit to be cleared (and thus clearing the interrupt source).
-        */
-       result = acpi_ec_transaction_unlocked(ec, &t);
-       if (result)
-               return result;
-       if (!d)
-               return -ENODATA;
-       *data = d;
-       return 0;
+static void acpi_ec_query_handler_release(struct kref *kref)
+{
+       struct acpi_ec_query_handler *handler =
+               container_of(kref, struct acpi_ec_query_handler, kref);
+
+       kfree(handler);
+}
+
+static void acpi_ec_put_query_handler(struct acpi_ec_query_handler *handler)
+{
+       kref_put(&handler->kref, acpi_ec_query_handler_release);
 }
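The kref get/put helpers above replace the old kmalloc-a-copy scheme for query handlers: a handler now stays alive for as long as any user holds a reference, and the release function runs exactly once, when the last reference is dropped. A minimal userspace analogue of the pattern (not the kernel's struct kref; `released` exists only so the effect is observable):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal analogue of the kref pattern adopted for query handlers:
 * get() takes a reference, put() drops one and runs release() when the
 * count hits zero.  The real release function calls kfree(). */
struct handler {
	unsigned refcount;
	int released;		/* set by release() for observability */
};

static struct handler *handler_get(struct handler *h)
{
	if (h)
		h->refcount++;
	return h;
}

static void handler_release(struct handler *h)
{
	h->released = 1;	/* the driver frees the handler here */
}

static void handler_put(struct handler *h)
{
	if (--h->refcount == 0)
		handler_release(h);
}
```

This is why acpi_ec_run() can safely use the handler after ec->mutex is dropped: the query path took a reference before scheduling it.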
 
-/* --------------------------------------------------------------------------
-                                Event Management
-   -------------------------------------------------------------------------- */
 int acpi_ec_add_query_handler(struct acpi_ec *ec, u8 query_bit,
                              acpi_handle handle, acpi_ec_query_func func,
                              void *data)
@@ -593,6 +881,7 @@ int acpi_ec_add_query_handler(struct acpi_ec *ec, u8 query_bit,
        handler->func = func;
        handler->data = data;
        mutex_lock(&ec->mutex);
+       kref_init(&handler->kref);
        list_add(&handler->node, &ec->list);
        mutex_unlock(&ec->mutex);
        return 0;
@@ -602,15 +891,18 @@ EXPORT_SYMBOL_GPL(acpi_ec_add_query_handler);
 void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit)
 {
        struct acpi_ec_query_handler *handler, *tmp;
+       LIST_HEAD(free_list);
 
        mutex_lock(&ec->mutex);
        list_for_each_entry_safe(handler, tmp, &ec->list, node) {
                if (query_bit == handler->query_bit) {
-                       list_del(&handler->node);
-                       kfree(handler);
+                       list_del_init(&handler->node);
+                       list_add(&handler->node, &free_list);
                }
        }
        mutex_unlock(&ec->mutex);
+       list_for_each_entry(handler, &free_list, node)
+               acpi_ec_put_query_handler(handler);
 }
 EXPORT_SYMBOL_GPL(acpi_ec_remove_query_handler);
 
@@ -626,59 +918,58 @@ static void acpi_ec_run(void *cxt)
        else if (handler->handle)
                acpi_evaluate_object(handler->handle, NULL, NULL, NULL);
        pr_debug("##### Query(0x%02x) stopped #####\n", handler->query_bit);
-       kfree(handler);
+       acpi_ec_put_query_handler(handler);
 }
 
-static int acpi_ec_sync_query(struct acpi_ec *ec, u8 *data)
+static int acpi_ec_query(struct acpi_ec *ec, u8 *data)
 {
        u8 value = 0;
-       int status;
-       struct acpi_ec_query_handler *handler, *copy;
+       int result;
+       acpi_status status;
+       struct acpi_ec_query_handler *handler;
+       struct transaction t = {.command = ACPI_EC_COMMAND_QUERY,
+                               .wdata = NULL, .rdata = &value,
+                               .wlen = 0, .rlen = 1};
 
-       status = acpi_ec_query_unlocked(ec, &value);
+       /*
+        * Query the EC to find out which _Qxx method we need to evaluate.
+        * Note that successful completion of the query causes the ACPI_EC_SCI
+        * bit to be cleared (and thus clearing the interrupt source).
+        */
+       result = acpi_ec_transaction(ec, &t);
+       if (result)
+               return result;
        if (data)
                *data = value;
-       if (status)
-               return status;
+       if (!value)
+               return -ENODATA;
 
+       mutex_lock(&ec->mutex);
        list_for_each_entry(handler, &ec->list, node) {
                if (value == handler->query_bit) {
                        /* have custom handler for this bit */
-                       copy = kmalloc(sizeof(*handler), GFP_KERNEL);
-                       if (!copy)
-                               return -ENOMEM;
-                       memcpy(copy, handler, sizeof(*copy));
+                       handler = acpi_ec_get_query_handler(handler);
                        pr_debug("##### Query(0x%02x) scheduled #####\n",
                                 handler->query_bit);
-                       return acpi_os_execute((copy->func) ?
+                       status = acpi_os_execute((handler->func) ?
                                OSL_NOTIFY_HANDLER : OSL_GPE_HANDLER,
-                               acpi_ec_run, copy);
+                               acpi_ec_run, handler);
+                       if (ACPI_FAILURE(status))
+                               result = -EBUSY;
+                       break;
                }
        }
-       return 0;
-}
-
-static void acpi_ec_gpe_query(void *ec_cxt)
-{
-       struct acpi_ec *ec = ec_cxt;
-
-       if (!ec)
-               return;
-       mutex_lock(&ec->mutex);
-       acpi_ec_sync_query(ec, NULL);
        mutex_unlock(&ec->mutex);
+       return result;
 }
 
-static int ec_check_sci(struct acpi_ec *ec, u8 state)
+static void acpi_ec_gpe_poller(struct work_struct *work)
 {
-       if (state & ACPI_EC_FLAG_SCI) {
-               if (!test_and_set_bit(EC_FLAGS_QUERY_PENDING, &ec->flags)) {
-                       pr_debug("***** Event started *****\n");
-                       return acpi_os_execute(OSL_NOTIFY_HANDLER,
-                               acpi_ec_gpe_query, ec);
-               }
-       }
-       return 0;
+       struct acpi_ec *ec = container_of(work, struct acpi_ec, work);
+
+       pr_debug("***** Event poller started *****\n");
+       acpi_ec_query(ec, NULL);
+       pr_debug("***** Event poller stopped *****\n");
 }
 
 static u32 acpi_ec_gpe_handler(acpi_handle gpe_device,
@@ -688,11 +979,9 @@ static u32 acpi_ec_gpe_handler(acpi_handle gpe_device,
        struct acpi_ec *ec = data;
 
        spin_lock_irqsave(&ec->lock, flags);
-       if (advance_transaction(ec))
-               wake_up(&ec->wait);
+       advance_transaction(ec);
        spin_unlock_irqrestore(&ec->lock, flags);
-       ec_check_sci(ec, acpi_ec_read_status(ec));
-       return ACPI_INTERRUPT_HANDLED | ACPI_REENABLE_GPE;
+       return ACPI_INTERRUPT_HANDLED;
 }
 
 /* --------------------------------------------------------------------------
@@ -750,11 +1039,11 @@ static struct acpi_ec *make_acpi_ec(void)
 
        if (!ec)
                return NULL;
-       ec->flags = 1 << EC_FLAGS_QUERY_PENDING;
        mutex_init(&ec->mutex);
        init_waitqueue_head(&ec->wait);
        INIT_LIST_HEAD(&ec->list);
        spin_lock_init(&ec->lock);
+       INIT_WORK(&ec->work, acpi_ec_gpe_poller);
        return ec;
 }
 
@@ -810,13 +1099,13 @@ static int ec_install_handlers(struct acpi_ec *ec)
 
        if (test_bit(EC_FLAGS_HANDLERS_INSTALLED, &ec->flags))
                return 0;
-       status = acpi_install_gpe_handler(NULL, ec->gpe,
+       status = acpi_install_gpe_raw_handler(NULL, ec->gpe,
                                  ACPI_GPE_EDGE_TRIGGERED,
                                  &acpi_ec_gpe_handler, ec);
        if (ACPI_FAILURE(status))
                return -ENODEV;
 
-       acpi_enable_gpe(NULL, ec->gpe);
+       acpi_ec_start(ec, false);
        status = acpi_install_address_space_handler(ec->handle,
                                                    ACPI_ADR_SPACE_EC,
                                                    &acpi_ec_space_handler,
@@ -831,7 +1120,7 @@ static int ec_install_handlers(struct acpi_ec *ec)
                        pr_err("Fail in evaluating the _REG object"
                                " of EC device. Broken bios is suspected.\n");
                } else {
-                       acpi_disable_gpe(NULL, ec->gpe);
+                       acpi_ec_stop(ec, false);
                        acpi_remove_gpe_handler(NULL, ec->gpe,
                                &acpi_ec_gpe_handler);
                        return -ENODEV;
@@ -846,7 +1135,7 @@ static void ec_remove_handlers(struct acpi_ec *ec)
 {
        if (!test_bit(EC_FLAGS_HANDLERS_INSTALLED, &ec->flags))
                return;
-       acpi_disable_gpe(NULL, ec->gpe);
+       acpi_ec_stop(ec, false);
        if (ACPI_FAILURE(acpi_remove_address_space_handler(ec->handle,
                                ACPI_ADR_SPACE_EC, &acpi_ec_space_handler)))
                pr_err("failed to remove space handler\n");
@@ -900,14 +1189,11 @@ static int acpi_ec_add(struct acpi_device *device)
        ret = ec_install_handlers(ec);
 
        /* EC is fully operational, allow queries */
-       clear_bit(EC_FLAGS_QUERY_PENDING, &ec->flags);
+       acpi_ec_enable_event(ec);
 
        /* Clear stale _Q events if hardware might require that */
-       if (EC_FLAGS_CLEAR_ON_RESUME) {
-               mutex_lock(&ec->mutex);
+       if (EC_FLAGS_CLEAR_ON_RESUME)
                acpi_ec_clear(ec);
-               mutex_unlock(&ec->mutex);
-       }
        return ret;
 }
 
index 163e82f536fa92337e9da8236e641eee5d90cc5b..56b321aa2b1c8a3b4395623ead294bc967702cf8 100644 (file)
@@ -35,6 +35,13 @@ void acpi_int340x_thermal_init(void);
 int acpi_sysfs_init(void);
 void acpi_container_init(void);
 void acpi_memory_hotplug_init(void);
+#ifdef CONFIG_ACPI_HOTPLUG_IOAPIC
+int acpi_ioapic_add(struct acpi_pci_root *root);
+int acpi_ioapic_remove(struct acpi_pci_root *root);
+#else
+static inline int acpi_ioapic_add(struct acpi_pci_root *root) { return 0; }
+static inline int acpi_ioapic_remove(struct acpi_pci_root *root) { return 0; }
+#endif
 #ifdef CONFIG_ACPI_DOCK
 void register_dock_dependent_device(struct acpi_device *adev,
                                    acpi_handle dshandle);
@@ -68,6 +75,8 @@ static inline void acpi_debugfs_init(void) { return; }
 #endif
 void acpi_lpss_init(void);
 
+void acpi_apd_init(void);
+
 acpi_status acpi_hotplug_schedule(struct acpi_device *adev, u32 src);
 bool acpi_queue_hotplug_work(struct work_struct *work);
 void acpi_device_hotplug(struct acpi_device *adev, u32 src);
@@ -122,11 +131,13 @@ struct acpi_ec {
        unsigned long data_addr;
        unsigned long global_lock;
        unsigned long flags;
+       unsigned long reference_count;
        struct mutex mutex;
        wait_queue_head_t wait;
        struct list_head list;
        struct transaction *curr;
        spinlock_t lock;
+       struct work_struct work;
 };
 
 extern struct acpi_ec *first_ec;
diff --git a/drivers/acpi/ioapic.c b/drivers/acpi/ioapic.c
new file mode 100644 (file)
index 0000000..ccdc8db
--- /dev/null
@@ -0,0 +1,229 @@
+/*
+ * IOAPIC/IOxAPIC/IOSAPIC driver
+ *
+ * Copyright (C) 2009 Fujitsu Limited.
+ * (c) Copyright 2009 Hewlett-Packard Development Company, L.P.
+ *
+ * Copyright (C) 2014 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Based on original drivers/pci/ioapic.c
+ *     Yinghai Lu <yinghai@kernel.org>
+ *     Jiang Liu <jiang.liu@intel.com>
+ */
+
+/*
+ * This driver manages I/O APICs added by hotplug after boot.
+ * We try to claim all I/O APIC devices, but those present at boot were
+ * registered when we parsed the ACPI MADT.
+ */
+
+#define pr_fmt(fmt) "ACPI : IOAPIC: " fmt
+
+#include <linux/slab.h>
+#include <linux/acpi.h>
+#include <linux/pci.h>
+#include <acpi/acpi.h>
+
+struct acpi_pci_ioapic {
+       acpi_handle     root_handle;
+       acpi_handle     handle;
+       u32             gsi_base;
+       struct resource res;
+       struct pci_dev  *pdev;
+       struct list_head list;
+};
+
+static LIST_HEAD(ioapic_list);
+static DEFINE_MUTEX(ioapic_list_lock);
+
+static acpi_status setup_res(struct acpi_resource *acpi_res, void *data)
+{
+       struct resource *res = data;
+       struct resource_win win;
+
+       res->flags = 0;
+       if (acpi_dev_filter_resource_type(acpi_res, IORESOURCE_MEM) == 0)
+               return AE_OK;
+
+       if (!acpi_dev_resource_memory(acpi_res, res)) {
+               if (acpi_dev_resource_address_space(acpi_res, &win) ||
+                   acpi_dev_resource_ext_address_space(acpi_res, &win))
+                       *res = win.res;
+       }
+       if ((res->flags & IORESOURCE_PREFETCH) ||
+           (res->flags & IORESOURCE_DISABLED))
+               res->flags = 0;
+
+       return AE_CTRL_TERMINATE;
+}
+
+static bool acpi_is_ioapic(acpi_handle handle, char **type)
+{
+       acpi_status status;
+       struct acpi_device_info *info;
+       char *hid = NULL;
+       bool match = false;
+
+       if (!acpi_has_method(handle, "_GSB"))
+               return false;
+
+       status = acpi_get_object_info(handle, &info);
+       if (ACPI_SUCCESS(status)) {
+               if (info->valid & ACPI_VALID_HID)
+                       hid = info->hardware_id.string;
+               if (hid) {
+                       if (strcmp(hid, "ACPI0009") == 0) {
+                               *type = "IOxAPIC";
+                               match = true;
+                       } else if (strcmp(hid, "ACPI000A") == 0) {
+                               *type = "IOAPIC";
+                               match = true;
+                       }
+               }
+               kfree(info);
+       }
+
+       return match;
+}
+
+static acpi_status handle_ioapic_add(acpi_handle handle, u32 lvl,
+                                    void *context, void **rv)
+{
+       acpi_status status;
+       unsigned long long gsi_base;
+       struct acpi_pci_ioapic *ioapic;
+       struct pci_dev *dev = NULL;
+       struct resource *res = NULL;
+       char *type = NULL;
+
+       if (!acpi_is_ioapic(handle, &type))
+               return AE_OK;
+
+       mutex_lock(&ioapic_list_lock);
+       list_for_each_entry(ioapic, &ioapic_list, list)
+               if (ioapic->handle == handle) {
+                       mutex_unlock(&ioapic_list_lock);
+                       return AE_OK;
+               }
+
+       status = acpi_evaluate_integer(handle, "_GSB", NULL, &gsi_base);
+       if (ACPI_FAILURE(status)) {
+               acpi_handle_warn(handle, "failed to evaluate _GSB method\n");
+               goto exit;
+       }
+
+       ioapic = kzalloc(sizeof(*ioapic), GFP_KERNEL);
+       if (!ioapic) {
+               pr_err("cannot allocate memory for new IOAPIC\n");
+               goto exit;
+       } else {
+               ioapic->root_handle = (acpi_handle)context;
+               ioapic->handle = handle;
+               ioapic->gsi_base = (u32)gsi_base;
+               INIT_LIST_HEAD(&ioapic->list);
+       }
+
+       if (acpi_ioapic_registered(handle, (u32)gsi_base))
+               goto done;
+
+       dev = acpi_get_pci_dev(handle);
+       if (dev && pci_resource_len(dev, 0)) {
+               if (pci_enable_device(dev) < 0)
+                       goto exit_put;
+               pci_set_master(dev);
+               if (pci_request_region(dev, 0, type))
+                       goto exit_disable;
+               res = &dev->resource[0];
+               ioapic->pdev = dev;
+       } else {
+               pci_dev_put(dev);
+               dev = NULL;
+
+               res = &ioapic->res;
+               acpi_walk_resources(handle, METHOD_NAME__CRS, setup_res, res);
+               if (res->flags == 0) {
+                       acpi_handle_warn(handle, "failed to get resource\n");
+                       goto exit_free;
+               } else if (request_resource(&iomem_resource, res)) {
+                       acpi_handle_warn(handle, "failed to insert resource\n");
+                       goto exit_free;
+               }
+       }
+
+       if (acpi_register_ioapic(handle, res->start, (u32)gsi_base)) {
+               acpi_handle_warn(handle, "failed to register IOAPIC\n");
+               goto exit_release;
+       }
+done:
+       list_add(&ioapic->list, &ioapic_list);
+       mutex_unlock(&ioapic_list_lock);
+
+       if (dev)
+               dev_info(&dev->dev, "%s at %pR, GSI %u\n",
+                        type, res, (u32)gsi_base);
+       else
+               acpi_handle_info(handle, "%s at %pR, GSI %u\n",
+                                type, res, (u32)gsi_base);
+
+       return AE_OK;
+
+exit_release:
+       if (dev)
+               pci_release_region(dev, 0);
+       else
+               release_resource(res);
+exit_disable:
+       if (dev)
+               pci_disable_device(dev);
+exit_put:
+       pci_dev_put(dev);
+exit_free:
+       kfree(ioapic);
+exit:
+       mutex_unlock(&ioapic_list_lock);
+       *(acpi_status *)rv = AE_ERROR;
+       return AE_OK;
+}
+
+int acpi_ioapic_add(struct acpi_pci_root *root)
+{
+       acpi_status status, retval = AE_OK;
+
+       status = acpi_walk_namespace(ACPI_TYPE_DEVICE, root->device->handle,
+                                    UINT_MAX, handle_ioapic_add, NULL,
+                                    root->device->handle, (void **)&retval);
+
+       return ACPI_SUCCESS(status) && ACPI_SUCCESS(retval) ? 0 : -ENODEV;
+}
+
+int acpi_ioapic_remove(struct acpi_pci_root *root)
+{
+       int retval = 0;
+       struct acpi_pci_ioapic *ioapic, *tmp;
+
+       mutex_lock(&ioapic_list_lock);
+       list_for_each_entry_safe(ioapic, tmp, &ioapic_list, list) {
+               if (root->device->handle != ioapic->root_handle)
+                       continue;
+
+               if (acpi_unregister_ioapic(ioapic->handle, ioapic->gsi_base))
+                       retval = -EBUSY;
+
+               if (ioapic->pdev) {
+                       pci_release_region(ioapic->pdev, 0);
+                       pci_disable_device(ioapic->pdev);
+                       pci_dev_put(ioapic->pdev);
+               } else if (ioapic->res.flags && ioapic->res.parent) {
+                       release_resource(&ioapic->res);
+               }
+               list_del(&ioapic->list);
+               kfree(ioapic);
+       }
+       mutex_unlock(&ioapic_list_lock);
+
+       return retval;
+}
index 24b5476449a110ad8c16ef6a246169be06e41b65..1333cbdc3ea2fa673b744c2b81b6c311071be704 100644 (file)
@@ -177,12 +177,7 @@ static int __init slit_valid(struct acpi_table_slit *slit)
 
 static int __init acpi_parse_slit(struct acpi_table_header *table)
 {
-       struct acpi_table_slit *slit;
-
-       if (!table)
-               return -EINVAL;
-
-       slit = (struct acpi_table_slit *)table;
+       struct acpi_table_slit *slit = (struct acpi_table_slit *)table;
 
        if (!slit_valid(slit)) {
                printk(KERN_INFO "ACPI: SLIT table looks invalid. Not used.\n");
@@ -260,11 +255,8 @@ acpi_parse_memory_affinity(struct acpi_subtable_header * header,
 
 static int __init acpi_parse_srat(struct acpi_table_header *table)
 {
-       struct acpi_table_srat *srat;
-       if (!table)
-               return -EINVAL;
+       struct acpi_table_srat *srat = (struct acpi_table_srat *)table;
 
-       srat = (struct acpi_table_srat *)table;
        acpi_srat_revision = srat->header.revision;
 
        /* Real work done in acpi_table_parse_srat below. */
index b1def411c0b89cbf7847b767063c5c2ab528e8a8..e7f718d6918a6a29b775aa97f7a40dc654d47136 100644 (file)
@@ -485,14 +485,6 @@ void acpi_pci_irq_disable(struct pci_dev *dev)
        if (!pin || !dev->irq_managed || dev->irq <= 0)
                return;
 
-       /* Keep IOAPIC pin configuration when suspending */
-       if (dev->dev.power.is_prepared)
-               return;
-#ifdef CONFIG_PM
-       if (dev->dev.power.runtime_status == RPM_SUSPENDING)
-               return;
-#endif
-
        entry = acpi_pci_irq_lookup(dev, pin);
        if (!entry)
                return;
@@ -513,5 +505,6 @@ void acpi_pci_irq_disable(struct pci_dev *dev)
        if (gsi >= 0) {
                acpi_unregister_gsi(gsi);
                dev->irq_managed = 0;
+               dev->irq = 0;
        }
 }
index c6bcb8c719d83ab463fa114b9712cd9d9a456fe2..68a5f712cd198e5cd61b866d2068b9d52a1274f4 100644 (file)
@@ -112,10 +112,10 @@ get_root_bridge_busnr_callback(struct acpi_resource *resource, void *data)
        if (ACPI_FAILURE(status))
                return AE_OK;
 
-       if ((address.address_length > 0) &&
+       if ((address.address.address_length > 0) &&
            (address.resource_type == ACPI_BUS_NUMBER_RANGE)) {
-               res->start = address.minimum;
-               res->end = address.minimum + address.address_length - 1;
+               res->start = address.address.minimum;
+               res->end = address.address.minimum + address.address.address_length - 1;
        }
 
        return AE_OK;
@@ -621,6 +621,7 @@ static int acpi_pci_root_add(struct acpi_device *device,
        if (hotadd) {
                pcibios_resource_survey_bus(root->bus);
                pci_assign_unassigned_root_bus_resources(root->bus);
+               acpi_ioapic_add(root);
        }
 
        pci_lock_rescan_remove();
@@ -644,6 +645,8 @@ static void acpi_pci_root_remove(struct acpi_device *device)
 
        pci_stop_root_bus(root->bus);
 
+       WARN_ON(acpi_ioapic_remove(root));
+
        device_set_run_wake(root->bus->bridge, false);
        pci_acpi_remove_bus_pm_notifier(device);
 
index 02e48394276c785aa84c72fcacf231b4b6cc4587..7962651cdbd43a6a7b7e22e75263f90c9567dc80 100644 (file)
@@ -4,6 +4,10 @@
  *
  *     Alex Chiang <achiang@hp.com>
  *     - Unified x86/ia64 implementations
+ *
+ * I/O APIC hotplug support
+ *     Yinghai Lu <yinghai@kernel.org>
+ *     Jiang Liu <jiang.liu@intel.com>
  */
 #include <linux/export.h>
 #include <linux/acpi.h>
 #define _COMPONENT             ACPI_PROCESSOR_COMPONENT
 ACPI_MODULE_NAME("processor_core");
 
+static struct acpi_table_madt *get_madt_table(void)
+{
+       static struct acpi_table_madt *madt;
+       static int read_madt;
+
+       if (!read_madt) {
+               if (ACPI_FAILURE(acpi_get_table(ACPI_SIG_MADT, 0,
+                                       (struct acpi_table_header **)&madt)))
+                       madt = NULL;
+               read_madt++;
+       }
+
+       return madt;
+}
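get_madt_table() factors the old inline lookup into a one-shot cache: the (possibly failing) acpi_get_table() call runs at most once, and every later caller gets the cached result, including a cached NULL. The idiom, sketched with a stand-in loader in place of acpi_get_table():

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the one-shot caching idiom behind get_madt_table(): the
 * lookup runs at most once; later calls return the cached result, even
 * a cached failure.  load_table() stands in for acpi_get_table(). */
static int load_calls;

static const char *load_table(void)
{
	load_calls++;
	return "MADT";		/* pretend the firmware table was found */
}

static const char *get_table_cached(void)
{
	static const char *table;
	static int looked_up;

	if (!looked_up) {
		table = load_table();	/* may legitimately return NULL */
		looked_up = 1;
	}
	return table;
}
```

Note the separate `looked_up` flag: caching a NULL result is the point, so the pointer itself cannot double as the "already tried" marker.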
+
 static int map_lapic_id(struct acpi_subtable_header *entry,
                 u32 acpi_id, int *apic_id)
 {
@@ -67,17 +86,10 @@ static int map_lsapic_id(struct acpi_subtable_header *entry,
 static int map_madt_entry(int type, u32 acpi_id)
 {
        unsigned long madt_end, entry;
-       static struct acpi_table_madt *madt;
-       static int read_madt;
        int phys_id = -1;       /* CPU hardware ID */
+       struct acpi_table_madt *madt;
 
-       if (!read_madt) {
-               if (ACPI_FAILURE(acpi_get_table(ACPI_SIG_MADT, 0,
-                                       (struct acpi_table_header **)&madt)))
-                       madt = NULL;
-               read_madt++;
-       }
-
+       madt = get_madt_table();
        if (!madt)
                return phys_id;
 
@@ -203,3 +215,96 @@ int acpi_get_cpuid(acpi_handle handle, int type, u32 acpi_id)
        return acpi_map_cpuid(phys_id, acpi_id);
 }
 EXPORT_SYMBOL_GPL(acpi_get_cpuid);
+
+#ifdef CONFIG_ACPI_HOTPLUG_IOAPIC
+static int get_ioapic_id(struct acpi_subtable_header *entry, u32 gsi_base,
+                        u64 *phys_addr, int *ioapic_id)
+{
+       struct acpi_madt_io_apic *ioapic = (struct acpi_madt_io_apic *)entry;
+
+       if (ioapic->global_irq_base != gsi_base)
+               return 0;
+
+       *phys_addr = ioapic->address;
+       *ioapic_id = ioapic->id;
+       return 1;
+}
+
+static int parse_madt_ioapic_entry(u32 gsi_base, u64 *phys_addr)
+{
+       struct acpi_subtable_header *hdr;
+       unsigned long madt_end, entry;
+       struct acpi_table_madt *madt;
+       int apic_id = -1;
+
+       madt = get_madt_table();
+       if (!madt)
+               return apic_id;
+
+       entry = (unsigned long)madt;
+       madt_end = entry + madt->header.length;
+
+       /* Parse all entries looking for a match. */
+       entry += sizeof(struct acpi_table_madt);
+       while (entry + sizeof(struct acpi_subtable_header) < madt_end) {
+               hdr = (struct acpi_subtable_header *)entry;
+               if (hdr->type == ACPI_MADT_TYPE_IO_APIC &&
+                   get_ioapic_id(hdr, gsi_base, phys_addr, &apic_id))
+                       break;
+               else
+                       entry += hdr->length;
+       }
+
+       return apic_id;
+}
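parse_madt_ioapic_entry() walks the MADT's variable-length subtables: each entry begins with a (type, length) header, and the walk advances by each entry's own declared length. The traversal can be sketched over a plain byte buffer; the two-byte header below mirrors the shape of struct acpi_subtable_header but is an illustrative layout, not the real ACPI structure:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of an ACPI-style subtable walk: variable-length entries, each
 * prefixed by a (type, length) header, scanned for a wanted type. */
struct sub_hdr {
	uint8_t type;
	uint8_t length;
};

/* Return the byte offset of the first entry of the wanted type, or -1. */
static int find_subtable(const uint8_t *buf, size_t len, uint8_t wanted)
{
	size_t off = 0;

	while (off + sizeof(struct sub_hdr) <= len) {
		const struct sub_hdr *h = (const struct sub_hdr *)(buf + off);

		if (h->length < sizeof(*h))
			return -1;	/* malformed entry: stop the walk */
		if (h->type == wanted)
			return (int)off;
		off += h->length;	/* advance by the entry's own length */
	}
	return -1;
}
```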
+
+static int parse_mat_ioapic_entry(acpi_handle handle, u32 gsi_base,
+                                 u64 *phys_addr)
+{
+       struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
+       struct acpi_subtable_header *header;
+       union acpi_object *obj;
+       int apic_id = -1;
+
+       if (ACPI_FAILURE(acpi_evaluate_object(handle, "_MAT", NULL, &buffer)))
+               goto exit;
+
+       if (!buffer.length || !buffer.pointer)
+               goto exit;
+
+       obj = buffer.pointer;
+       if (obj->type != ACPI_TYPE_BUFFER ||
+           obj->buffer.length < sizeof(struct acpi_subtable_header))
+               goto exit;
+
+       header = (struct acpi_subtable_header *)obj->buffer.pointer;
+       if (header->type == ACPI_MADT_TYPE_IO_APIC)
+               get_ioapic_id(header, gsi_base, phys_addr, &apic_id);
+
+exit:
+       kfree(buffer.pointer);
+       return apic_id;
+}
+
+/**
+ * acpi_get_ioapic_id - Get IOAPIC ID and physical address matching @gsi_base
+ * @handle:    ACPI object for IOAPIC device
+ * @gsi_base:  GSI base to match with
+ * @phys_addr: Pointer to store physical address of matching IOAPIC record
+ *
+ * First walk the resources returned by the ACPI _MAT method, then fall back
+ * to the ACPI MADT table, searching for an ACPI IOAPIC record matching
+ * @gsi_base.  Return the IOAPIC ID and store the physical address in
+ * @phys_addr if a match is found, otherwise return a negative value.
+ */
+int acpi_get_ioapic_id(acpi_handle handle, u32 gsi_base, u64 *phys_addr)
+{
+       int apic_id;
+
+       apic_id = parse_mat_ioapic_entry(handle, gsi_base, phys_addr);
+       if (apic_id == -1)
+               apic_id = parse_madt_ioapic_entry(gsi_base, phys_addr);
+
+       return apic_id;
+}
+#endif /* CONFIG_ACPI_HOTPLUG_IOAPIC */
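The MADT walk above advances through variable-length subtables, each led by a type/length header, until an I/O APIC entry with a matching GSI base is found. A stand-alone user-space sketch of that pattern, with simplified stand-in structures rather than the real ACPICA types, might look like this:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Simplified model of an ACPI subtable header: every MADT entry
 * begins with a type byte and a length byte. */
struct subtable_header {
	uint8_t type;
	uint8_t length;
};

/* Illustrative I/O APIC entry layout; field names mirror the kernel's
 * struct acpi_madt_io_apic but sizes here are assumptions. */
struct ioapic_entry {
	struct subtable_header hdr;
	uint8_t id;
	uint8_t reserved;
	uint32_t address;
	uint32_t gsi_base;
};

#define TYPE_IO_APIC 1

/* Scan [buf, buf + len) for an I/O APIC entry whose GSI base matches.
 * Returns the APIC id, or -1 if nothing matches. */
static int find_ioapic(const uint8_t *buf, size_t len, uint32_t gsi_base,
		       uint64_t *phys_addr)
{
	size_t off = 0;

	while (off + sizeof(struct subtable_header) <= len) {
		const struct subtable_header *hdr =
			(const struct subtable_header *)(buf + off);

		if (hdr->length == 0)
			break;	/* malformed table: avoid an endless loop */
		if (hdr->type == TYPE_IO_APIC) {
			const struct ioapic_entry *e =
				(const struct ioapic_entry *)hdr;

			if (e->gsi_base == gsi_base) {
				*phys_addr = e->address;
				return e->id;
			}
		}
		off += hdr->length;
	}
	return -1;
}
```

Unlike this sketch, the kernel code also bounds the walk by the MADT header's total length field; the zero-length guard here stands in for that sanity checking.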
index 87b704e41877daa488eec962f545f32cf0d079fe..c256bd7fbd78fd66c18b61d76d1d83dce1480c31 100644 (file)
@@ -681,15 +681,13 @@ static int acpi_idle_bm_check(void)
 }
 
 /**
- * acpi_idle_do_entry - a helper function that does C2 and C3 type entry
+ * acpi_idle_do_entry - enter idle state using the appropriate method
  * @cx: cstate data
  *
  * Caller disables interrupt before call and enables interrupt after return.
  */
-static inline void acpi_idle_do_entry(struct acpi_processor_cx *cx)
+static void acpi_idle_do_entry(struct acpi_processor_cx *cx)
 {
-       /* Don't trace irqs off for idle */
-       stop_critical_timings();
        if (cx->entry_method == ACPI_CSTATE_FFH) {
                /* Call into architectural FFH based C-state */
                acpi_processor_ffh_cstate_enter(cx);
@@ -703,37 +701,8 @@ static inline void acpi_idle_do_entry(struct acpi_processor_cx *cx)
                   gets asserted in time to freeze execution properly. */
                inl(acpi_gbl_FADT.xpm_timer_block.address);
        }
-       start_critical_timings();
 }
 
-/**
- * acpi_idle_enter_c1 - enters an ACPI C1 state-type
- * @dev: the target CPU
- * @drv: cpuidle driver containing cpuidle state info
- * @index: index of target state
- *
- * This is equivalent to the HALT instruction.
- */
-static int acpi_idle_enter_c1(struct cpuidle_device *dev,
-               struct cpuidle_driver *drv, int index)
-{
-       struct acpi_processor *pr;
-       struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
-
-       pr = __this_cpu_read(processors);
-
-       if (unlikely(!pr))
-               return -EINVAL;
-
-       lapic_timer_state_broadcast(pr, cx, 1);
-       acpi_idle_do_entry(cx);
-
-       lapic_timer_state_broadcast(pr, cx, 0);
-
-       return index;
-}
-
-
 /**
  * acpi_idle_play_dead - enters an ACPI state for long-term idle (i.e. off-lining)
  * @dev: the target CPU
@@ -761,47 +730,11 @@ static int acpi_idle_play_dead(struct cpuidle_device *dev, int index)
        return 0;
 }
 
-/**
- * acpi_idle_enter_simple - enters an ACPI state without BM handling
- * @dev: the target CPU
- * @drv: cpuidle driver with cpuidle state information
- * @index: the index of suggested state
- */
-static int acpi_idle_enter_simple(struct cpuidle_device *dev,
-               struct cpuidle_driver *drv, int index)
+static bool acpi_idle_fallback_to_c1(struct acpi_processor *pr)
 {
-       struct acpi_processor *pr;
-       struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
-
-       pr = __this_cpu_read(processors);
-
-       if (unlikely(!pr))
-               return -EINVAL;
-
-#ifdef CONFIG_HOTPLUG_CPU
-       if ((cx->type != ACPI_STATE_C1) && (num_online_cpus() > 1) &&
-           !pr->flags.has_cst &&
-           !(acpi_gbl_FADT.flags & ACPI_FADT_C2_MP_SUPPORTED))
-               return acpi_idle_enter_c1(dev, drv, CPUIDLE_DRIVER_STATE_START);
-#endif
-
-       /*
-        * Must be done before busmaster disable as we might need to
-        * access HPET !
-        */
-       lapic_timer_state_broadcast(pr, cx, 1);
-
-       if (cx->type == ACPI_STATE_C3)
-               ACPI_FLUSH_CPU_CACHE();
-
-       /* Tell the scheduler that we are going deep-idle: */
-       sched_clock_idle_sleep_event();
-       acpi_idle_do_entry(cx);
-
-       sched_clock_idle_wakeup_event(0);
-
-       lapic_timer_state_broadcast(pr, cx, 0);
-       return index;
+       return IS_ENABLED(CONFIG_HOTPLUG_CPU) && num_online_cpus() > 1 &&
+               !(acpi_gbl_FADT.flags & ACPI_FADT_C2_MP_SUPPORTED) &&
+               !pr->flags.has_cst;
 }
 
 static int c3_cpu_count;
@@ -809,44 +742,14 @@ static DEFINE_RAW_SPINLOCK(c3_lock);
 
 /**
  * acpi_idle_enter_bm - enters C3 with proper BM handling
- * @dev: the target CPU
- * @drv: cpuidle driver containing state data
- * @index: the index of suggested state
- *
- * If BM is detected, the deepest non-C3 idle state is entered instead.
+ * @pr: Target processor
+ * @cx: Target state context
  */
-static int acpi_idle_enter_bm(struct cpuidle_device *dev,
-               struct cpuidle_driver *drv, int index)
+static void acpi_idle_enter_bm(struct acpi_processor *pr,
+                              struct acpi_processor_cx *cx)
 {
-       struct acpi_processor *pr;
-       struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
-
-       pr = __this_cpu_read(processors);
-
-       if (unlikely(!pr))
-               return -EINVAL;
-
-#ifdef CONFIG_HOTPLUG_CPU
-       if ((cx->type != ACPI_STATE_C1) && (num_online_cpus() > 1) &&
-           !pr->flags.has_cst &&
-           !(acpi_gbl_FADT.flags & ACPI_FADT_C2_MP_SUPPORTED))
-               return acpi_idle_enter_c1(dev, drv, CPUIDLE_DRIVER_STATE_START);
-#endif
-
-       if (!cx->bm_sts_skip && acpi_idle_bm_check()) {
-               if (drv->safe_state_index >= 0) {
-                       return drv->states[drv->safe_state_index].enter(dev,
-                                               drv, drv->safe_state_index);
-               } else {
-                       acpi_safe_halt();
-                       return -EBUSY;
-               }
-       }
-
        acpi_unlazy_tlb(smp_processor_id());
 
-       /* Tell the scheduler that we are going deep-idle: */
-       sched_clock_idle_sleep_event();
        /*
         * Must be done before busmaster disable as we might need to
         * access HPET !
@@ -856,37 +759,71 @@ static int acpi_idle_enter_bm(struct cpuidle_device *dev,
        /*
         * disable bus master
         * bm_check implies we need ARB_DIS
-        * !bm_check implies we need cache flush
         * bm_control implies whether we can do ARB_DIS
         *
         * That leaves a case where bm_check is set and bm_control is
         * not set. In that case we cannot do much, we enter C3
         * without doing anything.
         */
-       if (pr->flags.bm_check && pr->flags.bm_control) {
+       if (pr->flags.bm_control) {
                raw_spin_lock(&c3_lock);
                c3_cpu_count++;
                /* Disable bus master arbitration when all CPUs are in C3 */
                if (c3_cpu_count == num_online_cpus())
                        acpi_write_bit_register(ACPI_BITREG_ARB_DISABLE, 1);
                raw_spin_unlock(&c3_lock);
-       } else if (!pr->flags.bm_check) {
-               ACPI_FLUSH_CPU_CACHE();
        }
 
        acpi_idle_do_entry(cx);
 
        /* Re-enable bus master arbitration */
-       if (pr->flags.bm_check && pr->flags.bm_control) {
+       if (pr->flags.bm_control) {
                raw_spin_lock(&c3_lock);
                acpi_write_bit_register(ACPI_BITREG_ARB_DISABLE, 0);
                c3_cpu_count--;
                raw_spin_unlock(&c3_lock);
        }
 
-       sched_clock_idle_wakeup_event(0);
+       lapic_timer_state_broadcast(pr, cx, 0);
+}
+
+static int acpi_idle_enter(struct cpuidle_device *dev,
+                          struct cpuidle_driver *drv, int index)
+{
+       struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
+       struct acpi_processor *pr;
+
+       pr = __this_cpu_read(processors);
+       if (unlikely(!pr))
+               return -EINVAL;
+
+       if (cx->type != ACPI_STATE_C1) {
+               if (acpi_idle_fallback_to_c1(pr)) {
+                       index = CPUIDLE_DRIVER_STATE_START;
+                       cx = per_cpu(acpi_cstate[index], dev->cpu);
+               } else if (cx->type == ACPI_STATE_C3 && pr->flags.bm_check) {
+                       if (cx->bm_sts_skip || !acpi_idle_bm_check()) {
+                               acpi_idle_enter_bm(pr, cx);
+                               return index;
+                       } else if (drv->safe_state_index >= 0) {
+                               index = drv->safe_state_index;
+                               cx = per_cpu(acpi_cstate[index], dev->cpu);
+                       } else {
+                               acpi_safe_halt();
+                               return -EBUSY;
+                       }
+               }
+       }
+
+       lapic_timer_state_broadcast(pr, cx, 1);
+
+       if (cx->type == ACPI_STATE_C3)
+               ACPI_FLUSH_CPU_CACHE();
+
+       acpi_idle_do_entry(cx);
 
        lapic_timer_state_broadcast(pr, cx, 0);
+
        return index;
 }
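The consolidated entry path above folds the old C1/simple/BM handlers into one decision tree: C1 requests go straight in, deeper states may fall back to C1 on multiprocessor constraints, and C3 with bus-master checking either enters with BM handling, redirects to the safe state, or halts. A hypothetical pure-function model of that dispatch (names and the enum are illustrative, not kernel API) could be:

```c
#include <assert.h>
#include <stdbool.h>

/* Possible outcomes of the idle-entry decision in acpi_idle_enter(). */
enum idle_action {
	ENTER_REQUESTED,	/* enter the state the governor picked */
	FALLBACK_C1,		/* demote to C1 (MP without _CST support) */
	ENTER_BM,		/* enter C3 with bus-master handling */
	USE_SAFE_STATE,		/* redirect to drv->safe_state_index */
	SAFE_HALT		/* no safe state known: just halt */
};

static enum idle_action pick_idle_action(int cx_type, bool fallback_to_c1,
					 bool bm_check, bool bm_sts_skip,
					 bool bm_active, int safe_state_index)
{
	if (cx_type == 1)		/* ACPI_STATE_C1 */
		return ENTER_REQUESTED;
	if (fallback_to_c1)
		return FALLBACK_C1;
	if (cx_type == 3 && bm_check) {	/* ACPI_STATE_C3 */
		if (bm_sts_skip || !bm_active)
			return ENTER_BM;
		if (safe_state_index >= 0)
			return USE_SAFE_STATE;
		return SAFE_HALT;
	}
	return ENTER_REQUESTED;
}
```

The real function additionally re-reads the cstate pointer whenever the index changes; this sketch only captures the branch ordering.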
 
@@ -981,27 +918,12 @@ static int acpi_processor_setup_cpuidle_states(struct acpi_processor *pr)
                strncpy(state->desc, cx->desc, CPUIDLE_DESC_LEN);
                state->exit_latency = cx->latency;
                state->target_residency = cx->latency * latency_factor;
+               state->enter = acpi_idle_enter;
 
                state->flags = 0;
-               switch (cx->type) {
-                       case ACPI_STATE_C1:
-
-                       state->enter = acpi_idle_enter_c1;
-                       state->enter_dead = acpi_idle_play_dead;
-                       drv->safe_state_index = count;
-                       break;
-
-                       case ACPI_STATE_C2:
-                       state->enter = acpi_idle_enter_simple;
+               if (cx->type == ACPI_STATE_C1 || cx->type == ACPI_STATE_C2) {
                        state->enter_dead = acpi_idle_play_dead;
                        drv->safe_state_index = count;
-                       break;
-
-                       case ACPI_STATE_C3:
-                       state->enter = pr->flags.bm_check ?
-                                       acpi_idle_enter_bm :
-                                       acpi_idle_enter_simple;
-                       break;
                }
 
                count++;
index 782a0d15c25fa7b89900a06a3bb8eb194ddcd032..4752b99399870efd068a1f1a6c656b1ebe333349 100644 (file)
 #define valid_IRQ(i) (true)
 #endif
 
-static unsigned long acpi_dev_memresource_flags(u64 len, u8 write_protect,
-                                               bool window)
+static bool acpi_dev_resource_len_valid(u64 start, u64 end, u64 len, bool io)
 {
-       unsigned long flags = IORESOURCE_MEM;
+       u64 reslen = end - start + 1;
 
-       if (len == 0)
-               flags |= IORESOURCE_DISABLED;
+       /*
+        * CHECKME: len might also need to be checked against a minimum
+        * length.  A minimum of 1 is fine for an I/O port, but makes
+        * no sense at all for a memory range.
+        */
+       if (len && reslen && reslen == len && start <= end)
+               return true;
 
-       if (write_protect == ACPI_READ_WRITE_MEMORY)
-               flags |= IORESOURCE_MEM_WRITEABLE;
+       pr_info("ACPI: invalid or unassigned resource %s [%016llx - %016llx] length [%016llx]\n",
+               io ? "io" : "mem", start, end, len);
+
+       return false;
+}
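The validity rule introduced here is small but load-bearing: a resource is considered assigned only when the declared length is non-zero, the start does not exceed the end, and the declared length agrees with the decoded span. A stand-alone restatement (illustrative, outside the kernel) is:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mirror of the check in acpi_dev_resource_len_valid(): a resource is
 * valid when start <= end, the span end - start + 1 is non-zero, and
 * the descriptor's declared length matches that span exactly. */
static bool res_len_valid(uint64_t start, uint64_t end, uint64_t len)
{
	uint64_t reslen = end - start + 1;

	return len && reslen && reslen == len && start <= end;
}
```

Note that when `start > end` the unsigned subtraction wraps, but the final `start <= end` test still rejects the range, so evaluation order in the conjunction is not relied upon for correctness.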
+
+static void acpi_dev_memresource_flags(struct resource *res, u64 len,
+                                      u8 write_protect)
+{
+       res->flags = IORESOURCE_MEM;
 
-       if (window)
-               flags |= IORESOURCE_WINDOW;
+       if (!acpi_dev_resource_len_valid(res->start, res->end, len, false))
+               res->flags |= IORESOURCE_DISABLED | IORESOURCE_UNSET;
 
-       return flags;
+       if (write_protect == ACPI_READ_WRITE_MEMORY)
+               res->flags |= IORESOURCE_MEM_WRITEABLE;
 }
 
 static void acpi_dev_get_memresource(struct resource *res, u64 start, u64 len,
@@ -56,7 +69,7 @@ static void acpi_dev_get_memresource(struct resource *res, u64 start, u64 len,
 {
        res->start = start;
        res->end = start + len - 1;
-       res->flags = acpi_dev_memresource_flags(len, write_protect, false);
+       acpi_dev_memresource_flags(res, len, write_protect);
 }
 
 /**
@@ -67,6 +80,11 @@ static void acpi_dev_get_memresource(struct resource *res, u64 start, u64 len,
  * Check if the given ACPI resource object represents a memory resource and
  * if that's the case, use the information in it to populate the generic
  * resource object pointed to by @res.
+ *
+ * Return:
+ * 1) false, with res->flags set to zero: not the expected resource type
+ * 2) false, with IORESOURCE_DISABLED set in res->flags: valid but unassigned resource
+ * 3) true: valid assigned resource
  */
 bool acpi_dev_resource_memory(struct acpi_resource *ares, struct resource *res)
 {
@@ -77,60 +95,52 @@ bool acpi_dev_resource_memory(struct acpi_resource *ares, struct resource *res)
        switch (ares->type) {
        case ACPI_RESOURCE_TYPE_MEMORY24:
                memory24 = &ares->data.memory24;
-               if (!memory24->minimum && !memory24->address_length)
-                       return false;
-               acpi_dev_get_memresource(res, memory24->minimum,
-                                        memory24->address_length,
+               acpi_dev_get_memresource(res, memory24->minimum << 8,
+                                        memory24->address_length << 8,
                                         memory24->write_protect);
                break;
        case ACPI_RESOURCE_TYPE_MEMORY32:
                memory32 = &ares->data.memory32;
-               if (!memory32->minimum && !memory32->address_length)
-                       return false;
                acpi_dev_get_memresource(res, memory32->minimum,
                                         memory32->address_length,
                                         memory32->write_protect);
                break;
        case ACPI_RESOURCE_TYPE_FIXED_MEMORY32:
                fixed_memory32 = &ares->data.fixed_memory32;
-               if (!fixed_memory32->address && !fixed_memory32->address_length)
-                       return false;
                acpi_dev_get_memresource(res, fixed_memory32->address,
                                         fixed_memory32->address_length,
                                         fixed_memory32->write_protect);
                break;
        default:
+               res->flags = 0;
                return false;
        }
-       return true;
+
+       return !(res->flags & IORESOURCE_DISABLED);
 }
 EXPORT_SYMBOL_GPL(acpi_dev_resource_memory);
 
-static unsigned int acpi_dev_ioresource_flags(u64 start, u64 end, u8 io_decode,
-                                             bool window)
+static void acpi_dev_ioresource_flags(struct resource *res, u64 len,
+                                     u8 io_decode)
 {
-       int flags = IORESOURCE_IO;
+       res->flags = IORESOURCE_IO;
 
-       if (io_decode == ACPI_DECODE_16)
-               flags |= IORESOURCE_IO_16BIT_ADDR;
+       if (!acpi_dev_resource_len_valid(res->start, res->end, len, true))
+               res->flags |= IORESOURCE_DISABLED | IORESOURCE_UNSET;
 
-       if (start > end || end >= 0x10003)
-               flags |= IORESOURCE_DISABLED;
+       if (res->end >= 0x10003)
+               res->flags |= IORESOURCE_DISABLED | IORESOURCE_UNSET;
 
-       if (window)
-               flags |= IORESOURCE_WINDOW;
-
-       return flags;
+       if (io_decode == ACPI_DECODE_16)
+               res->flags |= IORESOURCE_IO_16BIT_ADDR;
 }
 
 static void acpi_dev_get_ioresource(struct resource *res, u64 start, u64 len,
                                    u8 io_decode)
 {
-       u64 end = start + len - 1;
-
        res->start = start;
-       res->end = end;
-       res->flags = acpi_dev_ioresource_flags(start, end, io_decode, false);
+       res->end = start + len - 1;
+       acpi_dev_ioresource_flags(res, len, io_decode);
 }
 
 /**
@@ -141,6 +151,11 @@ static void acpi_dev_get_ioresource(struct resource *res, u64 start, u64 len,
  * Check if the given ACPI resource object represents an I/O resource and
  * if that's the case, use the information in it to populate the generic
  * resource object pointed to by @res.
+ *
+ * Return:
+ * 1) false, with res->flags set to zero: not the expected resource type
+ * 2) false, with IORESOURCE_DISABLED set in res->flags: valid but unassigned resource
+ * 3) true: valid assigned resource
  */
 bool acpi_dev_resource_io(struct acpi_resource *ares, struct resource *res)
 {
@@ -150,135 +165,143 @@ bool acpi_dev_resource_io(struct acpi_resource *ares, struct resource *res)
        switch (ares->type) {
        case ACPI_RESOURCE_TYPE_IO:
                io = &ares->data.io;
-               if (!io->minimum && !io->address_length)
-                       return false;
                acpi_dev_get_ioresource(res, io->minimum,
                                        io->address_length,
                                        io->io_decode);
                break;
        case ACPI_RESOURCE_TYPE_FIXED_IO:
                fixed_io = &ares->data.fixed_io;
-               if (!fixed_io->address && !fixed_io->address_length)
-                       return false;
                acpi_dev_get_ioresource(res, fixed_io->address,
                                        fixed_io->address_length,
                                        ACPI_DECODE_10);
                break;
        default:
+               res->flags = 0;
                return false;
        }
-       return true;
+
+       return !(res->flags & IORESOURCE_DISABLED);
 }
 EXPORT_SYMBOL_GPL(acpi_dev_resource_io);
 
-/**
- * acpi_dev_resource_address_space - Extract ACPI address space information.
- * @ares: Input ACPI resource object.
- * @res: Output generic resource object.
- *
- * Check if the given ACPI resource object represents an address space resource
- * and if that's the case, use the information in it to populate the generic
- * resource object pointed to by @res.
- */
-bool acpi_dev_resource_address_space(struct acpi_resource *ares,
-                                    struct resource *res)
+static bool acpi_decode_space(struct resource_win *win,
+                             struct acpi_resource_address *addr,
+                             struct acpi_address64_attribute *attr)
 {
-       acpi_status status;
-       struct acpi_resource_address64 addr;
-       bool window;
-       u64 len;
-       u8 io_decode;
+       u8 iodec = attr->granularity == 0xfff ? ACPI_DECODE_10 : ACPI_DECODE_16;
+       bool wp = addr->info.mem.write_protect;
+       u64 len = attr->address_length;
+       struct resource *res = &win->res;
 
-       switch (ares->type) {
-       case ACPI_RESOURCE_TYPE_ADDRESS16:
-       case ACPI_RESOURCE_TYPE_ADDRESS32:
-       case ACPI_RESOURCE_TYPE_ADDRESS64:
-               break;
-       default:
-               return false;
-       }
+       /*
+        * Filter out invalid descriptor according to ACPI Spec 5.0, section
+        * 6.4.3.5 Address Space Resource Descriptors.
+        */
+       if ((addr->min_address_fixed != addr->max_address_fixed && len) ||
+           (addr->min_address_fixed && addr->max_address_fixed && !len))
+               pr_debug("ACPI: Invalid address space min_addr_fix %d, max_addr_fix %d, len %llx\n",
+                        addr->min_address_fixed, addr->max_address_fixed, len);
 
-       status = acpi_resource_to_address64(ares, &addr);
-       if (ACPI_FAILURE(status))
-               return false;
+       res->start = attr->minimum;
+       res->end = attr->maximum;
 
-       res->start = addr.minimum;
-       res->end = addr.maximum;
-       window = addr.producer_consumer == ACPI_PRODUCER;
+       /*
+        * For bridges that translate addresses across the bridge,
+        * translation_offset is the offset that must be added to the
+        * address on the secondary side to obtain the address on the
+        * primary side. Non-bridge devices must list 0 for all Address
+        * Translation offset bits.
+        */
+       if (addr->producer_consumer == ACPI_PRODUCER) {
+               res->start += attr->translation_offset;
+               res->end += attr->translation_offset;
+       } else if (attr->translation_offset) {
+               pr_debug("ACPI: translation_offset(%lld) is invalid for non-bridge device.\n",
+                        attr->translation_offset);
+       }
 
-       switch(addr.resource_type) {
+       switch (addr->resource_type) {
        case ACPI_MEMORY_RANGE:
-               len = addr.maximum - addr.minimum + 1;
-               res->flags = acpi_dev_memresource_flags(len,
-                                               addr.info.mem.write_protect,
-                                               window);
+               acpi_dev_memresource_flags(res, len, wp);
                break;
        case ACPI_IO_RANGE:
-               io_decode = addr.granularity == 0xfff ?
-                               ACPI_DECODE_10 : ACPI_DECODE_16;
-               res->flags = acpi_dev_ioresource_flags(addr.minimum,
-                                                      addr.maximum,
-                                                      io_decode, window);
+               acpi_dev_ioresource_flags(res, len, iodec);
                break;
        case ACPI_BUS_NUMBER_RANGE:
                res->flags = IORESOURCE_BUS;
                break;
        default:
-               res->flags = 0;
+               return false;
        }
 
-       return true;
+       win->offset = attr->translation_offset;
+
+       if (addr->producer_consumer == ACPI_PRODUCER)
+               res->flags |= IORESOURCE_WINDOW;
+
+       if (addr->info.mem.caching == ACPI_PREFETCHABLE_MEMORY)
+               res->flags |= IORESOURCE_PREFETCH;
+
+       return !(res->flags & IORESOURCE_DISABLED);
+}
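The comment in acpi_decode_space() about translation offsets deserves a concrete restatement: for a bridge window (an ACPI_PRODUCER resource), the offset is added to both ends so the range is expressed in primary-side (CPU-visible) addresses, while the raw offset is also stored for later consumers. A minimal sketch under those assumptions, with an illustrative struct rather than the kernel's `struct resource_win`:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for a host-bridge window: the translated
 * range plus the secondary-to-primary translation offset. */
struct win {
	uint64_t start, end, offset;
};

/* Model of the translation handling: producer (bridge) windows are
 * shifted by translation_offset; consumer resources are not. */
static void decode_window(struct win *w, uint64_t min, uint64_t max,
			  uint64_t translation_offset, bool producer)
{
	w->start = min;
	w->end = max;
	if (producer) {
		w->start += translation_offset;
		w->end += translation_offset;
	}
	/* Keep the raw offset so callers can convert back. */
	w->offset = translation_offset;
}
```

In the kernel code a non-zero offset on a consumer resource is reported with pr_debug() rather than applied, which this sketch reflects by leaving the range untouched.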
+
+/**
+ * acpi_dev_resource_address_space - Extract ACPI address space information.
+ * @ares: Input ACPI resource object.
+ * @win: Output generic resource object.
+ *
+ * Check if the given ACPI resource object represents an address space resource
+ * and if that's the case, use the information in it to populate the generic
+ * resource object pointed to by @win.
+ *
+ * Return:
+ * 1) false, with win->res.flags set to zero: not the expected resource type
+ * 2) false, with IORESOURCE_DISABLED set in win->res.flags: valid but
+ *    unassigned resource
+ * 3) true: valid assigned resource
+ */
+bool acpi_dev_resource_address_space(struct acpi_resource *ares,
+                                    struct resource_win *win)
+{
+       struct acpi_resource_address64 addr;
+
+       win->res.flags = 0;
+       if (ACPI_FAILURE(acpi_resource_to_address64(ares, &addr)))
+               return false;
+
+       return acpi_decode_space(win, (struct acpi_resource_address *)&addr,
+                                &addr.address);
 }
 EXPORT_SYMBOL_GPL(acpi_dev_resource_address_space);
 
 /**
  * acpi_dev_resource_ext_address_space - Extract ACPI address space information.
  * @ares: Input ACPI resource object.
- * @res: Output generic resource object.
+ * @win: Output generic resource object.
  *
  * Check if the given ACPI resource object represents an extended address space
  * resource and if that's the case, use the information in it to populate the
- * generic resource object pointed to by @res.
+ * generic resource object pointed to by @win.
+ *
+ * Return:
+ * 1) false, with win->res.flags set to zero: not the expected resource type
+ * 2) false, with IORESOURCE_DISABLED set in win->res.flags: valid but
+ *    unassigned resource
+ * 3) true: valid assigned resource
  */
 bool acpi_dev_resource_ext_address_space(struct acpi_resource *ares,
-                                        struct resource *res)
+                                        struct resource_win *win)
 {
        struct acpi_resource_extended_address64 *ext_addr;
-       bool window;
-       u64 len;
-       u8 io_decode;
 
+       win->res.flags = 0;
        if (ares->type != ACPI_RESOURCE_TYPE_EXTENDED_ADDRESS64)
                return false;
 
        ext_addr = &ares->data.ext_address64;
 
-       res->start = ext_addr->minimum;
-       res->end = ext_addr->maximum;
-       window = ext_addr->producer_consumer == ACPI_PRODUCER;
-
-       switch(ext_addr->resource_type) {
-       case ACPI_MEMORY_RANGE:
-               len = ext_addr->maximum - ext_addr->minimum + 1;
-               res->flags = acpi_dev_memresource_flags(len,
-                                       ext_addr->info.mem.write_protect,
-                                       window);
-               break;
-       case ACPI_IO_RANGE:
-               io_decode = ext_addr->granularity == 0xfff ?
-                               ACPI_DECODE_10 : ACPI_DECODE_16;
-               res->flags = acpi_dev_ioresource_flags(ext_addr->minimum,
-                                                      ext_addr->maximum,
-                                                      io_decode, window);
-               break;
-       case ACPI_BUS_NUMBER_RANGE:
-               res->flags = IORESOURCE_BUS;
-               break;
-       default:
-               res->flags = 0;
-       }
-
-       return true;
+       return acpi_decode_space(win, (struct acpi_resource_address *)ext_addr,
+                                &ext_addr->address);
 }
 EXPORT_SYMBOL_GPL(acpi_dev_resource_ext_address_space);
 
@@ -310,7 +333,7 @@ static void acpi_dev_irqresource_disabled(struct resource *res, u32 gsi)
 {
        res->start = gsi;
        res->end = gsi;
-       res->flags = IORESOURCE_IRQ | IORESOURCE_DISABLED;
+       res->flags = IORESOURCE_IRQ | IORESOURCE_DISABLED | IORESOURCE_UNSET;
 }
 
 static void acpi_dev_get_irqresource(struct resource *res, u32 gsi,
@@ -369,6 +392,11 @@ static void acpi_dev_get_irqresource(struct resource *res, u32 gsi,
  * represented by the resource and populate the generic resource object pointed
  * to by @res accordingly.  If the registration of the GSI is not successful,
 * IORESOURCE_DISABLED will be set in that object's flags.
+ *
+ * Return:
+ * 1) false, with res->flags set to zero: not the expected resource type
+ * 2) false, with IORESOURCE_DISABLED set in res->flags: valid unassigned resource
+ * 3) true: valid assigned resource
  */
 bool acpi_dev_resource_interrupt(struct acpi_resource *ares, int index,
                                 struct resource *res)
@@ -402,6 +430,7 @@ bool acpi_dev_resource_interrupt(struct acpi_resource *ares, int index,
                                         ext_irq->sharable, false);
                break;
        default:
+               res->flags = 0;
                return false;
        }
 
@@ -415,12 +444,7 @@ EXPORT_SYMBOL_GPL(acpi_dev_resource_interrupt);
  */
 void acpi_dev_free_resource_list(struct list_head *list)
 {
-       struct resource_list_entry *rentry, *re;
-
-       list_for_each_entry_safe(rentry, re, list, node) {
-               list_del(&rentry->node);
-               kfree(rentry);
-       }
+       resource_list_free(list);
 }
 EXPORT_SYMBOL_GPL(acpi_dev_free_resource_list);
 
@@ -432,18 +456,19 @@ struct res_proc_context {
        int error;
 };
 
-static acpi_status acpi_dev_new_resource_entry(struct resource *r,
+static acpi_status acpi_dev_new_resource_entry(struct resource_win *win,
                                               struct res_proc_context *c)
 {
-       struct resource_list_entry *rentry;
+       struct resource_entry *rentry;
 
-       rentry = kmalloc(sizeof(*rentry), GFP_KERNEL);
+       rentry = resource_list_create_entry(NULL, 0);
        if (!rentry) {
                c->error = -ENOMEM;
                return AE_NO_MEMORY;
        }
-       rentry->res = *r;
-       list_add_tail(&rentry->node, c->list);
+       *rentry->res = win->res;
+       rentry->offset = win->offset;
+       resource_list_add_tail(rentry, c->list);
        c->count++;
        return AE_OK;
 }
@@ -452,7 +477,8 @@ static acpi_status acpi_dev_process_resource(struct acpi_resource *ares,
                                             void *context)
 {
        struct res_proc_context *c = context;
-       struct resource r;
+       struct resource_win win;
+       struct resource *res = &win.res;
        int i;
 
        if (c->preproc) {
@@ -467,18 +493,18 @@ static acpi_status acpi_dev_process_resource(struct acpi_resource *ares,
                }
        }
 
-       memset(&r, 0, sizeof(r));
+       memset(&win, 0, sizeof(win));
 
-       if (acpi_dev_resource_memory(ares, &r)
-           || acpi_dev_resource_io(ares, &r)
-           || acpi_dev_resource_address_space(ares, &r)
-           || acpi_dev_resource_ext_address_space(ares, &r))
-               return acpi_dev_new_resource_entry(&r, c);
+       if (acpi_dev_resource_memory(ares, res)
+           || acpi_dev_resource_io(ares, res)
+           || acpi_dev_resource_address_space(ares, &win)
+           || acpi_dev_resource_ext_address_space(ares, &win))
+               return acpi_dev_new_resource_entry(&win, c);
 
-       for (i = 0; acpi_dev_resource_interrupt(ares, i, &r); i++) {
+       for (i = 0; acpi_dev_resource_interrupt(ares, i, res); i++) {
                acpi_status status;
 
-               status = acpi_dev_new_resource_entry(&r, c);
+               status = acpi_dev_new_resource_entry(&win, c);
                if (ACPI_FAILURE(status))
                        return status;
        }
@@ -503,7 +529,7 @@ static acpi_status acpi_dev_process_resource(struct acpi_resource *ares,
  * returned as the final error code.
  *
  * The resultant struct resource objects are put on the list pointed to by
- * @list, that must be empty initially, as members of struct resource_list_entry
+ * @list, that must be empty initially, as members of struct resource_entry
  * objects.  Callers of this routine should use %acpi_dev_free_resource_list() to
  * free that list.
  *
@@ -538,3 +564,58 @@ int acpi_dev_get_resources(struct acpi_device *adev, struct list_head *list,
        return c.count;
 }
 EXPORT_SYMBOL_GPL(acpi_dev_get_resources);
+
+/**
+ * acpi_dev_filter_resource_type - Filter ACPI resource according to resource
+ *                                types
+ * @ares: Input ACPI resource object.
+ * @types: Valid resource types of IORESOURCE_XXX
+ *
+ * This is a helper function for acpi_dev_get_resources(), which filters
+ * ACPI resource objects according to resource types.
+ */
+int acpi_dev_filter_resource_type(struct acpi_resource *ares,
+                                 unsigned long types)
+{
+       unsigned long type = 0;
+
+       switch (ares->type) {
+       case ACPI_RESOURCE_TYPE_MEMORY24:
+       case ACPI_RESOURCE_TYPE_MEMORY32:
+       case ACPI_RESOURCE_TYPE_FIXED_MEMORY32:
+               type = IORESOURCE_MEM;
+               break;
+       case ACPI_RESOURCE_TYPE_IO:
+       case ACPI_RESOURCE_TYPE_FIXED_IO:
+               type = IORESOURCE_IO;
+               break;
+       case ACPI_RESOURCE_TYPE_IRQ:
+       case ACPI_RESOURCE_TYPE_EXTENDED_IRQ:
+               type = IORESOURCE_IRQ;
+               break;
+       case ACPI_RESOURCE_TYPE_DMA:
+       case ACPI_RESOURCE_TYPE_FIXED_DMA:
+               type = IORESOURCE_DMA;
+               break;
+       case ACPI_RESOURCE_TYPE_GENERIC_REGISTER:
+               type = IORESOURCE_REG;
+               break;
+       case ACPI_RESOURCE_TYPE_ADDRESS16:
+       case ACPI_RESOURCE_TYPE_ADDRESS32:
+       case ACPI_RESOURCE_TYPE_ADDRESS64:
+       case ACPI_RESOURCE_TYPE_EXTENDED_ADDRESS64:
+               if (ares->data.address.resource_type == ACPI_MEMORY_RANGE)
+                       type = IORESOURCE_MEM;
+               else if (ares->data.address.resource_type == ACPI_IO_RANGE)
+                       type = IORESOURCE_IO;
+               else if (ares->data.address.resource_type ==
+                        ACPI_BUS_NUMBER_RANGE)
+                       type = IORESOURCE_BUS;
+               break;
+       default:
+               break;
+       }
+
+       return (type & types) ? 0 : 1;
+}
+EXPORT_SYMBOL_GPL(acpi_dev_filter_resource_type);
index dc4d8960684a78f12978d56b911ead4f02a2074a..bbca7830e18a717cd129a9bdddb2f312ae18bb9f 100644 (file)
@@ -2544,6 +2544,7 @@ int __init acpi_scan_init(void)
        acpi_pci_link_init();
        acpi_processor_init();
        acpi_lpss_init();
+       acpi_apd_init();
        acpi_cmos_rtc_init();
        acpi_container_init();
        acpi_memory_hotplug_init();
index 8aa9254a387fb1c70e90857f70412aa44bceaa6b..7f251dd1a6870df8e21c4b3e87d8ec6b193c5663 100644 (file)
@@ -321,7 +321,7 @@ static struct dmi_system_id acpisleep_dmi_table[] __initdata = {
        {},
 };
 
-static void acpi_sleep_dmi_check(void)
+static void __init acpi_sleep_dmi_check(void)
 {
        int year;
 
index 032db459370f85481ba091dc3761e727284fe0f3..88a4f99dd2a7ccc117924a73b47ca0d5a93c8df5 100644 (file)
@@ -522,6 +522,24 @@ static struct dmi_system_id video_dmi_table[] __initdata = {
                DMI_MATCH(DMI_PRODUCT_NAME, "370R4E/370R4V/370R5E/3570RE/370R5V"),
                },
        },
+       {
+        /* https://bugzilla.redhat.com/show_bug.cgi?id=1186097 */
+        .callback = video_disable_native_backlight,
+        .ident = "SAMSUNG 3570R/370R/470R/450R/510R/4450RV",
+        .matches = {
+               DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."),
+               DMI_MATCH(DMI_PRODUCT_NAME, "3570R/370R/470R/450R/510R/4450RV"),
+               },
+       },
+       {
+        /* https://bugzilla.redhat.com/show_bug.cgi?id=1094948 */
+        .callback = video_disable_native_backlight,
+        .ident = "SAMSUNG 730U3E/740U3E",
+        .matches = {
+               DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."),
+               DMI_MATCH(DMI_PRODUCT_NAME, "730U3E/740U3E"),
+               },
+       },
 
        {
         /* https://bugzilla.redhat.com/show_bug.cgi?id=1163574 */
index b0f138806bbc4cb67bc782c075a9228acc500f12..f32b802b98f4b56bc0e733fe1efe5b81b182b009 100644 (file)
@@ -19,8 +19,8 @@
  * @dev: Device to handle.
  *
  * If power.subsys_data is NULL, point it to a new object, otherwise increment
- * its reference counter.  Return 1 if a new object has been created, otherwise
- * return 0 or error code.
+ * its reference counter.  Return 0 if a new object has been created or the
+ * refcount has been increased, otherwise a negative error code.
  */
 int dev_pm_get_subsys_data(struct device *dev)
 {
@@ -56,13 +56,11 @@ EXPORT_SYMBOL_GPL(dev_pm_get_subsys_data);
  * @dev: Device to handle.
  *
  * If the reference counter of power.subsys_data is zero after dropping the
- * reference, power.subsys_data is removed.  Return 1 if that happens or 0
- * otherwise.
+ * reference, power.subsys_data is removed.
  */
-int dev_pm_put_subsys_data(struct device *dev)
+void dev_pm_put_subsys_data(struct device *dev)
 {
        struct pm_subsys_data *psd;
-       int ret = 1;
 
        spin_lock_irq(&dev->power.lock);
 
@@ -70,18 +68,14 @@ int dev_pm_put_subsys_data(struct device *dev)
        if (!psd)
                goto out;
 
-       if (--psd->refcount == 0) {
+       if (--psd->refcount == 0)
                dev->power.subsys_data = NULL;
-       } else {
+       else
                psd = NULL;
-               ret = 0;
-       }
 
  out:
        spin_unlock_irq(&dev->power.lock);
        kfree(psd);
-
-       return ret;
 }
 EXPORT_SYMBOL_GPL(dev_pm_put_subsys_data);
 
index 0d8780c04a5e4d7c409b2ad1aa2d7e4c0c4f424b..ba4abbe4693c3e29be764d66662295dd300d41c4 100644 (file)
@@ -344,14 +344,7 @@ static int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
        struct device *dev;
 
        gpd_data = container_of(nb, struct generic_pm_domain_data, nb);
-
-       mutex_lock(&gpd_data->lock);
        dev = gpd_data->base.dev;
-       if (!dev) {
-               mutex_unlock(&gpd_data->lock);
-               return NOTIFY_DONE;
-       }
-       mutex_unlock(&gpd_data->lock);
 
        for (;;) {
                struct generic_pm_domain *genpd;
@@ -1384,25 +1377,66 @@ EXPORT_SYMBOL_GPL(pm_genpd_syscore_poweron);
 
 #endif /* CONFIG_PM_SLEEP */
 
-static struct generic_pm_domain_data *__pm_genpd_alloc_dev_data(struct device *dev)
+static struct generic_pm_domain_data *genpd_alloc_dev_data(struct device *dev,
+                                       struct generic_pm_domain *genpd,
+                                       struct gpd_timing_data *td)
 {
        struct generic_pm_domain_data *gpd_data;
+       int ret;
+
+       ret = dev_pm_get_subsys_data(dev);
+       if (ret)
+               return ERR_PTR(ret);
 
        gpd_data = kzalloc(sizeof(*gpd_data), GFP_KERNEL);
-       if (!gpd_data)
-               return NULL;
+       if (!gpd_data) {
+               ret = -ENOMEM;
+               goto err_put;
+       }
+
+       if (td)
+               gpd_data->td = *td;
 
-       mutex_init(&gpd_data->lock);
+       gpd_data->base.dev = dev;
+       gpd_data->need_restore = -1;
+       gpd_data->td.constraint_changed = true;
+       gpd_data->td.effective_constraint_ns = -1;
        gpd_data->nb.notifier_call = genpd_dev_pm_qos_notifier;
-       dev_pm_qos_add_notifier(dev, &gpd_data->nb);
+
+       spin_lock_irq(&dev->power.lock);
+
+       if (dev->power.subsys_data->domain_data) {
+               ret = -EINVAL;
+               goto err_free;
+       }
+
+       dev->power.subsys_data->domain_data = &gpd_data->base;
+       dev->pm_domain = &genpd->domain;
+
+       spin_unlock_irq(&dev->power.lock);
+
        return gpd_data;
+
+ err_free:
+       spin_unlock_irq(&dev->power.lock);
+       kfree(gpd_data);
+ err_put:
+       dev_pm_put_subsys_data(dev);
+       return ERR_PTR(ret);
 }
 
-static void __pm_genpd_free_dev_data(struct device *dev,
-                                    struct generic_pm_domain_data *gpd_data)
+static void genpd_free_dev_data(struct device *dev,
+                               struct generic_pm_domain_data *gpd_data)
 {
-       dev_pm_qos_remove_notifier(dev, &gpd_data->nb);
+       spin_lock_irq(&dev->power.lock);
+
+       dev->pm_domain = NULL;
+       dev->power.subsys_data->domain_data = NULL;
+
+       spin_unlock_irq(&dev->power.lock);
+
        kfree(gpd_data);
+       dev_pm_put_subsys_data(dev);
 }
 
 /**
@@ -1414,8 +1448,7 @@ static void __pm_genpd_free_dev_data(struct device *dev,
 int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
                          struct gpd_timing_data *td)
 {
-       struct generic_pm_domain_data *gpd_data_new, *gpd_data = NULL;
-       struct pm_domain_data *pdd;
+       struct generic_pm_domain_data *gpd_data;
        int ret = 0;
 
        dev_dbg(dev, "%s()\n", __func__);
@@ -1423,9 +1456,9 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
        if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(dev))
                return -EINVAL;
 
-       gpd_data_new = __pm_genpd_alloc_dev_data(dev);
-       if (!gpd_data_new)
-               return -ENOMEM;
+       gpd_data = genpd_alloc_dev_data(dev, genpd, td);
+       if (IS_ERR(gpd_data))
+               return PTR_ERR(gpd_data);
 
        genpd_acquire_lock(genpd);
 
@@ -1434,50 +1467,22 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
                goto out;
        }
 
-       list_for_each_entry(pdd, &genpd->dev_list, list_node)
-               if (pdd->dev == dev) {
-                       ret = -EINVAL;
-                       goto out;
-               }
-
-       ret = dev_pm_get_subsys_data(dev);
+       ret = genpd->attach_dev ? genpd->attach_dev(genpd, dev) : 0;
        if (ret)
                goto out;
 
        genpd->device_count++;
        genpd->max_off_time_changed = true;
 
-       spin_lock_irq(&dev->power.lock);
-
-       dev->pm_domain = &genpd->domain;
-       if (dev->power.subsys_data->domain_data) {
-               gpd_data = to_gpd_data(dev->power.subsys_data->domain_data);
-       } else {
-               gpd_data = gpd_data_new;
-               dev->power.subsys_data->domain_data = &gpd_data->base;
-       }
-       gpd_data->refcount++;
-       if (td)
-               gpd_data->td = *td;
-
-       spin_unlock_irq(&dev->power.lock);
-
-       if (genpd->attach_dev)
-               genpd->attach_dev(genpd, dev);
-
-       mutex_lock(&gpd_data->lock);
-       gpd_data->base.dev = dev;
        list_add_tail(&gpd_data->base.list_node, &genpd->dev_list);
-       gpd_data->need_restore = -1;
-       gpd_data->td.constraint_changed = true;
-       gpd_data->td.effective_constraint_ns = -1;
-       mutex_unlock(&gpd_data->lock);
 
  out:
        genpd_release_lock(genpd);
 
-       if (gpd_data != gpd_data_new)
-               __pm_genpd_free_dev_data(dev, gpd_data_new);
+       if (ret)
+               genpd_free_dev_data(dev, gpd_data);
+       else
+               dev_pm_qos_add_notifier(dev, &gpd_data->nb);
 
        return ret;
 }
@@ -1504,7 +1509,6 @@ int pm_genpd_remove_device(struct generic_pm_domain *genpd,
 {
        struct generic_pm_domain_data *gpd_data;
        struct pm_domain_data *pdd;
-       bool remove = false;
        int ret = 0;
 
        dev_dbg(dev, "%s()\n", __func__);
@@ -1514,6 +1518,11 @@ int pm_genpd_remove_device(struct generic_pm_domain *genpd,
            ||  pd_to_genpd(dev->pm_domain) != genpd)
                return -EINVAL;
 
+       /* The above validation also means we have existing domain_data. */
+       pdd = dev->power.subsys_data->domain_data;
+       gpd_data = to_gpd_data(pdd);
+       dev_pm_qos_remove_notifier(dev, &gpd_data->nb);
+
        genpd_acquire_lock(genpd);
 
        if (genpd->prepared_count > 0) {
@@ -1527,57 +1536,21 @@ int pm_genpd_remove_device(struct generic_pm_domain *genpd,
        if (genpd->detach_dev)
                genpd->detach_dev(genpd, dev);
 
-       spin_lock_irq(&dev->power.lock);
-
-       dev->pm_domain = NULL;
-       pdd = dev->power.subsys_data->domain_data;
        list_del_init(&pdd->list_node);
-       gpd_data = to_gpd_data(pdd);
-       if (--gpd_data->refcount == 0) {
-               dev->power.subsys_data->domain_data = NULL;
-               remove = true;
-       }
-
-       spin_unlock_irq(&dev->power.lock);
-
-       mutex_lock(&gpd_data->lock);
-       pdd->dev = NULL;
-       mutex_unlock(&gpd_data->lock);
 
        genpd_release_lock(genpd);
 
-       dev_pm_put_subsys_data(dev);
-       if (remove)
-               __pm_genpd_free_dev_data(dev, gpd_data);
+       genpd_free_dev_data(dev, gpd_data);
 
        return 0;
 
  out:
        genpd_release_lock(genpd);
+       dev_pm_qos_add_notifier(dev, &gpd_data->nb);
 
        return ret;
 }
 
-/**
- * pm_genpd_dev_need_restore - Set/unset the device's "need restore" flag.
- * @dev: Device to set/unset the flag for.
- * @val: The new value of the device's "need restore" flag.
- */
-void pm_genpd_dev_need_restore(struct device *dev, bool val)
-{
-       struct pm_subsys_data *psd;
-       unsigned long flags;
-
-       spin_lock_irqsave(&dev->power.lock, flags);
-
-       psd = dev_to_psd(dev);
-       if (psd && psd->domain_data)
-               to_gpd_data(psd->domain_data)->need_restore = val ? 1 : 0;
-
-       spin_unlock_irqrestore(&dev->power.lock, flags);
-}
-EXPORT_SYMBOL_GPL(pm_genpd_dev_need_restore);
-
 /**
  * pm_genpd_add_subdomain - Add a subdomain to an I/O PM domain.
  * @genpd: Master PM domain to add the subdomain to.
index 106c69359306b595e74075853ae8cc4e1465924f..15bf29974c312a83fbf18c3a78b4b00f5d858cd5 100644 (file)
@@ -117,20 +117,20 @@ do {                                                                      \
 } while (0)
 
 /**
- * find_device_opp() - find device_opp struct using device pointer
+ * _find_device_opp() - find device_opp struct using device pointer
  * @dev:       device pointer used to lookup device OPPs
  *
  * Search list of device OPPs for one containing matching device. Does a RCU
  * reader operation to grab the pointer needed.
  *
- * Returns pointer to 'struct device_opp' if found, otherwise -ENODEV or
+ * Return: pointer to 'struct device_opp' if found, otherwise -ENODEV or
  * -EINVAL based on type of error.
  *
  * Locking: This function must be called under rcu_read_lock(). device_opp
  * is a RCU protected pointer. This means that device_opp is valid as long
  * as we are under RCU lock.
  */
-static struct device_opp *find_device_opp(struct device *dev)
+static struct device_opp *_find_device_opp(struct device *dev)
 {
        struct device_opp *tmp_dev_opp, *dev_opp = ERR_PTR(-ENODEV);
 
@@ -153,7 +153,7 @@ static struct device_opp *find_device_opp(struct device *dev)
  * dev_pm_opp_get_voltage() - Gets the voltage corresponding to an available opp
  * @opp:       opp for which voltage has to be returned for
  *
- * Return voltage in micro volt corresponding to the opp, else
+ * Return: voltage in micro volt corresponding to the opp, else
  * return 0
  *
  * Locking: This function must be called under rcu_read_lock(). opp is a rcu
@@ -169,6 +169,8 @@ unsigned long dev_pm_opp_get_voltage(struct dev_pm_opp *opp)
        struct dev_pm_opp *tmp_opp;
        unsigned long v = 0;
 
+       opp_rcu_lockdep_assert();
+
        tmp_opp = rcu_dereference(opp);
        if (unlikely(IS_ERR_OR_NULL(tmp_opp)) || !tmp_opp->available)
                pr_err("%s: Invalid parameters\n", __func__);
@@ -183,7 +185,7 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_voltage);
  * dev_pm_opp_get_freq() - Gets the frequency corresponding to an available opp
  * @opp:       opp for which frequency has to be returned for
  *
- * Return frequency in hertz corresponding to the opp, else
+ * Return: frequency in hertz corresponding to the opp, else
  * return 0
  *
  * Locking: This function must be called under rcu_read_lock(). opp is a rcu
@@ -199,6 +201,8 @@ unsigned long dev_pm_opp_get_freq(struct dev_pm_opp *opp)
        struct dev_pm_opp *tmp_opp;
        unsigned long f = 0;
 
+       opp_rcu_lockdep_assert();
+
        tmp_opp = rcu_dereference(opp);
        if (unlikely(IS_ERR_OR_NULL(tmp_opp)) || !tmp_opp->available)
                pr_err("%s: Invalid parameters\n", __func__);
@@ -213,7 +217,7 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_freq);
  * dev_pm_opp_get_opp_count() - Get number of opps available in the opp list
  * @dev:       device for which we do this operation
  *
- * This function returns the number of available opps if there are any,
+ * Return: This function returns the number of available opps if there are any,
  * else returns 0 if none or the corresponding error value.
  *
  * Locking: This function takes rcu_read_lock().
@@ -226,7 +230,7 @@ int dev_pm_opp_get_opp_count(struct device *dev)
 
        rcu_read_lock();
 
-       dev_opp = find_device_opp(dev);
+       dev_opp = _find_device_opp(dev);
        if (IS_ERR(dev_opp)) {
                count = PTR_ERR(dev_opp);
                dev_err(dev, "%s: device OPP not found (%d)\n",
@@ -251,9 +255,9 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_opp_count);
  * @freq:              frequency to search for
  * @available:         true/false - match for available opp
  *
- * Searches for exact match in the opp list and returns pointer to the matching
- * opp if found, else returns ERR_PTR in case of error and should be handled
- * using IS_ERR. Error return values can be:
+ * Return: pointer to the matching opp if an exact match is found in the opp
+ * list, else returns ERR_PTR in case of error, which should be handled
+ * using IS_ERR. Error return values can be:
  * EINVAL:     for bad pointer
  * ERANGE:     no match found for search
  * ENODEV:     if device not found in list of registered devices
@@ -280,7 +284,7 @@ struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
 
        opp_rcu_lockdep_assert();
 
-       dev_opp = find_device_opp(dev);
+       dev_opp = _find_device_opp(dev);
        if (IS_ERR(dev_opp)) {
                int r = PTR_ERR(dev_opp);
                dev_err(dev, "%s: device OPP not found (%d)\n", __func__, r);
@@ -307,7 +311,7 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_exact);
  * Search for the matching ceil *available* OPP from a starting freq
  * for a device.
  *
- * Returns matching *opp and refreshes *freq accordingly, else returns
+ * Return: matching *opp and refreshes *freq accordingly, else returns
  * ERR_PTR in case of error and should be handled using IS_ERR. Error return
  * values can be:
  * EINVAL:     for bad pointer
@@ -333,7 +337,7 @@ struct dev_pm_opp *dev_pm_opp_find_freq_ceil(struct device *dev,
                return ERR_PTR(-EINVAL);
        }
 
-       dev_opp = find_device_opp(dev);
+       dev_opp = _find_device_opp(dev);
        if (IS_ERR(dev_opp))
                return ERR_CAST(dev_opp);
 
@@ -357,7 +361,7 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_ceil);
  * Search for the matching floor *available* OPP from a starting freq
  * for a device.
  *
- * Returns matching *opp and refreshes *freq accordingly, else returns
+ * Return: matching *opp and refreshes *freq accordingly, else returns
  * ERR_PTR in case of error and should be handled using IS_ERR. Error return
  * values can be:
  * EINVAL:     for bad pointer
@@ -383,7 +387,7 @@ struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
                return ERR_PTR(-EINVAL);
        }
 
-       dev_opp = find_device_opp(dev);
+       dev_opp = _find_device_opp(dev);
        if (IS_ERR(dev_opp))
                return ERR_CAST(dev_opp);
 
@@ -403,7 +407,16 @@ struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_floor);
 
-static struct device_opp *add_device_opp(struct device *dev)
+/**
+ * _add_device_opp() - Allocate a new device OPP table
+ * @dev:       device for which we do this operation
+ *
+ * Allocates a new device OPP table entry for a device node that uses OPPs -
+ * used when OPP tables for multiple devices are maintained.
+ *
+ * Return: valid device_opp pointer if success, else NULL.
+ */
+static struct device_opp *_add_device_opp(struct device *dev)
 {
        struct device_opp *dev_opp;
 
@@ -424,8 +437,35 @@ static struct device_opp *add_device_opp(struct device *dev)
        return dev_opp;
 }
 
-static int dev_pm_opp_add_dynamic(struct device *dev, unsigned long freq,
-                                 unsigned long u_volt, bool dynamic)
+/**
+ * _opp_add_dynamic() - Allocate a dynamic OPP.
+ * @dev:       device for which we do this operation
+ * @freq:      Frequency in Hz for this OPP
+ * @u_volt:    Voltage in uVolts for this OPP
+ * @dynamic:   true if the OPP is added dynamically rather than parsed from DT.
+ *
+ * This function adds an opp definition to the opp list and returns status.
+ * The opp is made available by default and it can be controlled using
+ * dev_pm_opp_enable/disable functions and may be removed by dev_pm_opp_remove.
+ *
+ * NOTE: "dynamic" parameter impacts OPPs added by the of_init_opp_table and
+ * freed by of_free_opp_table.
+ *
+ * Locking: The internal device_opp and opp structures are RCU protected.
+ * Hence this function internally uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
+ *
+ * Return:
+ * 0           On success OR
+ *             Duplicate OPPs (both freq and volt are same) and opp->available
+ * -EEXIST     Freq are same and volt are different OR
+ *             Duplicate OPPs (both freq and volt are same) and !opp->available
+ * -ENOMEM     Memory allocation failure
+ */
+static int _opp_add_dynamic(struct device *dev, unsigned long freq,
+                           long u_volt, bool dynamic)
 {
        struct device_opp *dev_opp = NULL;
        struct dev_pm_opp *opp, *new_opp;
@@ -449,9 +489,9 @@ static int dev_pm_opp_add_dynamic(struct device *dev, unsigned long freq,
        new_opp->dynamic = dynamic;
 
        /* Check for existing list for 'dev' */
-       dev_opp = find_device_opp(dev);
+       dev_opp = _find_device_opp(dev);
        if (IS_ERR(dev_opp)) {
-               dev_opp = add_device_opp(dev);
+               dev_opp = _add_device_opp(dev);
                if (!dev_opp) {
                        ret = -ENOMEM;
                        goto free_opp;
@@ -519,34 +559,53 @@ free_opp:
  * mutex cannot be locked.
  *
  * Return:
- * 0:          On success OR
+ * 0           On success OR
  *             Duplicate OPPs (both freq and volt are same) and opp->available
- * -EEXIST:    Freq are same and volt are different OR
+ * -EEXIST     Freq are same and volt are different OR
  *             Duplicate OPPs (both freq and volt are same) and !opp->available
- * -ENOMEM:    Memory allocation failure
+ * -ENOMEM     Memory allocation failure
  */
 int dev_pm_opp_add(struct device *dev, unsigned long freq, unsigned long u_volt)
 {
-       return dev_pm_opp_add_dynamic(dev, freq, u_volt, true);
+       return _opp_add_dynamic(dev, freq, u_volt, true);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_add);
 
-static void kfree_opp_rcu(struct rcu_head *head)
+/**
+ * _kfree_opp_rcu() - Free OPP RCU handler
+ * @head:      RCU head
+ */
+static void _kfree_opp_rcu(struct rcu_head *head)
 {
        struct dev_pm_opp *opp = container_of(head, struct dev_pm_opp, rcu_head);
 
        kfree_rcu(opp, rcu_head);
 }
 
-static void kfree_device_rcu(struct rcu_head *head)
+/**
+ * _kfree_device_rcu() - Free device_opp RCU handler
+ * @head:      RCU head
+ */
+static void _kfree_device_rcu(struct rcu_head *head)
 {
        struct device_opp *device_opp = container_of(head, struct device_opp, rcu_head);
 
        kfree_rcu(device_opp, rcu_head);
 }
 
-static void __dev_pm_opp_remove(struct device_opp *dev_opp,
-                               struct dev_pm_opp *opp)
+/**
+ * _opp_remove()  - Remove an OPP from a table definition
+ * @dev_opp:   points back to the device_opp struct this opp belongs to
+ * @opp:       pointer to the OPP to remove
+ *
+ * This function removes an opp definition from the opp list.
+ *
+ * Locking: The internal device_opp and opp structures are RCU protected.
+ * It is assumed that the caller holds required mutex for an RCU updater
+ * strategy.
+ */
+static void _opp_remove(struct device_opp *dev_opp,
+                       struct dev_pm_opp *opp)
 {
        /*
         * Notify the changes in the availability of the operable
@@ -554,12 +613,12 @@ static void __dev_pm_opp_remove(struct device_opp *dev_opp,
         */
        srcu_notifier_call_chain(&dev_opp->srcu_head, OPP_EVENT_REMOVE, opp);
        list_del_rcu(&opp->node);
-       call_srcu(&dev_opp->srcu_head.srcu, &opp->rcu_head, kfree_opp_rcu);
+       call_srcu(&dev_opp->srcu_head.srcu, &opp->rcu_head, _kfree_opp_rcu);
 
        if (list_empty(&dev_opp->opp_list)) {
                list_del_rcu(&dev_opp->node);
                call_srcu(&dev_opp->srcu_head.srcu, &dev_opp->rcu_head,
-                         kfree_device_rcu);
+                         _kfree_device_rcu);
        }
 }
 
@@ -569,6 +628,12 @@ static void __dev_pm_opp_remove(struct device_opp *dev_opp,
  * @freq:      OPP to remove with matching 'freq'
  *
  * This function removes an opp from the opp list.
+ *
+ * Locking: The internal device_opp and opp structures are RCU protected.
+ * Hence this function internally uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
  */
 void dev_pm_opp_remove(struct device *dev, unsigned long freq)
 {
@@ -579,7 +644,7 @@ void dev_pm_opp_remove(struct device *dev, unsigned long freq)
        /* Hold our list modification lock here */
        mutex_lock(&dev_opp_list_lock);
 
-       dev_opp = find_device_opp(dev);
+       dev_opp = _find_device_opp(dev);
        if (IS_ERR(dev_opp))
                goto unlock;
 
@@ -596,14 +661,14 @@ void dev_pm_opp_remove(struct device *dev, unsigned long freq)
                goto unlock;
        }
 
-       __dev_pm_opp_remove(dev_opp, opp);
+       _opp_remove(dev_opp, opp);
 unlock:
        mutex_unlock(&dev_opp_list_lock);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_remove);
 
 /**
- * opp_set_availability() - helper to set the availability of an opp
+ * _opp_set_availability() - helper to set the availability of an opp
  * @dev:               device for which we do this operation
  * @freq:              OPP frequency to modify availability
  * @availability_req:  availability status requested for this opp
@@ -611,7 +676,7 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_remove);
  * Set the availability of an OPP with an RCU operation, opp_{enable,disable}
  * share a common logic which is isolated here.
  *
- * Returns -EINVAL for bad pointers, -ENOMEM if no memory available for the
+ * Return: -EINVAL for bad pointers, -ENOMEM if no memory available for the
 * copy operation, returns 0 if no modification was done OR modification was
  * successful.
  *
@@ -621,8 +686,8 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_remove);
  * that this function is *NOT* called under RCU protection or in contexts where
  * mutex locking or synchronize_rcu() blocking calls cannot be used.
  */
-static int opp_set_availability(struct device *dev, unsigned long freq,
-               bool availability_req)
+static int _opp_set_availability(struct device *dev, unsigned long freq,
+                                bool availability_req)
 {
        struct device_opp *dev_opp;
        struct dev_pm_opp *new_opp, *tmp_opp, *opp = ERR_PTR(-ENODEV);
@@ -638,7 +703,7 @@ static int opp_set_availability(struct device *dev, unsigned long freq,
        mutex_lock(&dev_opp_list_lock);
 
        /* Find the device_opp */
-       dev_opp = find_device_opp(dev);
+       dev_opp = _find_device_opp(dev);
        if (IS_ERR(dev_opp)) {
                r = PTR_ERR(dev_opp);
                dev_warn(dev, "%s: Device OPP not found (%d)\n", __func__, r);
@@ -668,7 +733,7 @@ static int opp_set_availability(struct device *dev, unsigned long freq,
 
        list_replace_rcu(&opp->node, &new_opp->node);
        mutex_unlock(&dev_opp_list_lock);
-       call_srcu(&dev_opp->srcu_head.srcu, &opp->rcu_head, kfree_opp_rcu);
+       call_srcu(&dev_opp->srcu_head.srcu, &opp->rcu_head, _kfree_opp_rcu);
 
        /* Notify the change of the OPP availability */
        if (availability_req)
@@ -700,10 +765,14 @@ unlock:
  * integrity of the internal data structures. Callers should ensure that
  * this function is *NOT* called under RCU protection or in contexts where
  * mutex locking or synchronize_rcu() blocking calls cannot be used.
+ *
+ * Return: -EINVAL for bad pointers, -ENOMEM if no memory available for the
+ * copy operation, returns 0 if no modification was done OR modification was
+ * successful.
  */
 int dev_pm_opp_enable(struct device *dev, unsigned long freq)
 {
-       return opp_set_availability(dev, freq, true);
+       return _opp_set_availability(dev, freq, true);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_enable);
 
@@ -722,26 +791,41 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_enable);
  * integrity of the internal data structures. Callers should ensure that
  * this function is *NOT* called under RCU protection or in contexts where
  * mutex locking or synchronize_rcu() blocking calls cannot be used.
+ *
+ * Return: -EINVAL for bad pointers, -ENOMEM if no memory available for the
+ * copy operation, returns 0 if no modification was done OR modification was
+ * successful.
  */
 int dev_pm_opp_disable(struct device *dev, unsigned long freq)
 {
-       return opp_set_availability(dev, freq, false);
+       return _opp_set_availability(dev, freq, false);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_disable);
 
 /**
  * dev_pm_opp_get_notifier() - find notifier_head of the device with opp
  * @dev:       device pointer used to lookup device OPPs.
+ *
+ * Return: pointer to the notifier head if found, otherwise -ENODEV or
+ * -EINVAL based on the type of error, cast as a pointer.  The value must be
+ * checked with IS_ERR to determine whether it is a valid pointer or an error.
+ *
+ * Locking: This function must be called under rcu_read_lock(). dev_opp is a RCU
+ * protected pointer. The reason for the same is that the opp pointer which is
+ * returned will remain valid for use with opp_get_{voltage, freq} only while
+ * under the locked area. The pointer returned must be used prior to unlocking
+ * with rcu_read_unlock() to maintain the integrity of the pointer.
  */
 struct srcu_notifier_head *dev_pm_opp_get_notifier(struct device *dev)
 {
-       struct device_opp *dev_opp = find_device_opp(dev);
+       struct device_opp *dev_opp = _find_device_opp(dev);
 
        if (IS_ERR(dev_opp))
                return ERR_CAST(dev_opp); /* matching type */
 
        return &dev_opp->srcu_head;
 }
+EXPORT_SYMBOL_GPL(dev_pm_opp_get_notifier);
 
 #ifdef CONFIG_OF
 /**
@@ -749,6 +833,22 @@ struct srcu_notifier_head *dev_pm_opp_get_notifier(struct device *dev)
  * @dev:       device pointer used to lookup device OPPs.
  *
  * Register the initial OPP table with the OPP library for given device.
+ *
+ * Locking: The internal device_opp and opp structures are RCU protected.
+ * Hence this function indirectly uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
+ *
+ * Return:
+ * 0           On success OR
+ *             Duplicate OPPs (both freq and volt are same) and opp->available
+ * -EEXIST     Freq are same and volt are different OR
+ *             Duplicate OPPs (both freq and volt are same) and !opp->available
+ * -ENOMEM     Memory allocation failure
+ * -ENODEV     when the 'operating-points' property is not found or contains
+ *             invalid data in the device node.
+ * -ENODATA    when an empty 'operating-points' property is found
  */
 int of_init_opp_table(struct device *dev)
 {
@@ -777,7 +877,7 @@ int of_init_opp_table(struct device *dev)
                unsigned long freq = be32_to_cpup(val++) * 1000;
                unsigned long volt = be32_to_cpup(val++);
 
-               if (dev_pm_opp_add_dynamic(dev, freq, volt, false))
+               if (_opp_add_dynamic(dev, freq, volt, false))
                        dev_warn(dev, "%s: Failed to add OPP %ld\n",
                                 __func__, freq);
                nr -= 2;
@@ -792,6 +892,12 @@ EXPORT_SYMBOL_GPL(of_init_opp_table);
  * @dev:       device pointer used to lookup device OPPs.
  *
  * Free OPPs created using static entries present in DT.
+ *
+ * Locking: The internal device_opp and opp structures are RCU protected.
+ * Hence this function indirectly uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
  */
 void of_free_opp_table(struct device *dev)
 {
@@ -799,7 +905,7 @@ void of_free_opp_table(struct device *dev)
        struct dev_pm_opp *opp, *tmp;
 
        /* Check for existing list for 'dev' */
-       dev_opp = find_device_opp(dev);
+       dev_opp = _find_device_opp(dev);
        if (IS_ERR(dev_opp)) {
                int error = PTR_ERR(dev_opp);
                if (error != -ENODEV)
@@ -816,7 +922,7 @@ void of_free_opp_table(struct device *dev)
        /* Free static OPPs */
        list_for_each_entry_safe(opp, tmp, &dev_opp->opp_list, node) {
                if (!opp->dynamic)
-                       __dev_pm_opp_remove(dev_opp, opp);
+                       _opp_remove(dev_opp, opp);
        }
 
        mutex_unlock(&dev_opp_list_lock);
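The locking comment added above describes the "RCU updater with mutex locks" rule: readers walk the OPP structures under RCU, while writers serialize on a mutex, unlink, wait for readers to drain, and only then free. A minimal userspace sketch of that ordering, with a stub standing in for `synchronize_rcu()` and illustrative names throughout (this is the pattern, not the kernel's OPP code):

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* Sketch of the RCU-updater-with-mutex rule: writers serialize on a mutex,
 * unlink the node while it is still published, wait for readers to drain,
 * then free.  synchronize_readers() is a stand-in for synchronize_rcu();
 * all names are illustrative, not the kernel's. */

struct opp_node {
    unsigned long freq;
    struct opp_node *next;
};

static struct opp_node *opp_list;               /* published list head */
static pthread_mutex_t opp_list_lock = PTHREAD_MUTEX_INITIALIZER;

static void synchronize_readers(void) { /* stand-in for synchronize_rcu() */ }

int opp_add(unsigned long freq)
{
    struct opp_node *n = malloc(sizeof(*n));

    if (!n)
        return -1;
    n->freq = freq;
    pthread_mutex_lock(&opp_list_lock);
    n->next = opp_list;
    opp_list = n;                               /* single-pointer publish */
    pthread_mutex_unlock(&opp_list_lock);
    return 0;
}

int opp_remove(unsigned long freq)
{
    struct opp_node **pp, *victim = NULL;

    pthread_mutex_lock(&opp_list_lock);         /* writers serialize here */
    for (pp = &opp_list; *pp; pp = &(*pp)->next) {
        if ((*pp)->freq == freq) {
            victim = *pp;
            *pp = victim->next;                 /* unlink first */
            break;
        }
    }
    pthread_mutex_unlock(&opp_list_lock);

    if (!victim)
        return -1;
    synchronize_readers();                      /* readers may still hold it */
    free(victim);                               /* now safe to reclaim */
    return 0;
}
```

The sketch also shows why the comment forbids calling the function under RCU protection: the writer side blocks (on the mutex and on the grace period), which is illegal inside a reader-side critical section.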
index a8fe4c1a8d07a6e099317c8decd85c3275eff3a2..e56d538d039e4220276ea772d9cd6c2ccd95d1d2 100644 (file)
@@ -64,6 +64,8 @@ enum pm_qos_flags_status __dev_pm_qos_flags(struct device *dev, s32 mask)
        struct pm_qos_flags *pqf;
        s32 val;
 
+       lockdep_assert_held(&dev->power.lock);
+
        if (IS_ERR_OR_NULL(qos))
                return PM_QOS_FLAGS_UNDEFINED;
 
@@ -104,6 +106,8 @@ EXPORT_SYMBOL_GPL(dev_pm_qos_flags);
  */
 s32 __dev_pm_qos_read_value(struct device *dev)
 {
+       lockdep_assert_held(&dev->power.lock);
+
        return IS_ERR_OR_NULL(dev->power.qos) ?
                0 : pm_qos_read_value(&dev->power.qos->resume_latency);
 }
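The two `lockdep_assert_held()` calls added above document and enforce the contract that callers of the double-underscore helpers already hold `dev->power.lock`. A rough userspace analogue of that check, recording the owner at lock time (illustrative only, not the kernel's lockdep machinery):

```c
#include <assert.h>
#include <pthread.h>

/* Userspace analogue of lockdep_assert_held(): remember the owner when the
 * lock is taken, so helpers whose contract is "caller must hold the lock"
 * can verify it cheaply.  Illustrative, not the kernel API. */

struct checked_mutex {
    pthread_mutex_t m;
    pthread_t owner;
    int held;
};

void cm_lock(struct checked_mutex *cm)
{
    pthread_mutex_lock(&cm->m);
    cm->owner = pthread_self();
    cm->held = 1;
}

void cm_unlock(struct checked_mutex *cm)
{
    cm->held = 0;
    pthread_mutex_unlock(&cm->m);
}

/* Returns nonzero iff the calling thread currently holds the lock. */
int cm_assert_held(struct checked_mutex *cm)
{
    return cm->held && pthread_equal(cm->owner, pthread_self());
}
```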
index d5d4cd82b9f7dac435af267806ff541777f3ebdb..5c0baa9ffc64f0b3fe4f55663fe473ba6c66d99c 100644 (file)
@@ -976,8 +976,8 @@ static acpi_status hpet_resources(struct acpi_resource *res, void *data)
        status = acpi_resource_to_address64(res, &addr);
 
        if (ACPI_SUCCESS(status)) {
-               hdp->hd_phys_address = addr.minimum;
-               hdp->hd_address = ioremap(addr.minimum, addr.address_length);
+               hdp->hd_phys_address = addr.address.minimum;
+               hdp->hd_address = ioremap(addr.address.minimum, addr.address.address_length);
 
                if (hpet_is_known(hdp)) {
                        iounmap(hdp->hd_address);
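The hpet hunk above only changes `addr.minimum` to `addr.address.minimum`: the ACPICA resources rework groups the min/max/length fields of an address-space resource into a shared sub-structure, so every consumer reaches one level deeper. A sketch of that layout change with made-up struct definitions (the real ACPICA types differ):

```c
#include <assert.h>

/* Illustrative struct shapes only, not the real ACPICA definitions: the
 * rework moves minimum/maximum/address_length into a common sub-structure
 * shared by the 16/32/64-bit address resource variants. */

struct address_attribute {          /* the new shared sub-structure */
    unsigned long long minimum;
    unsigned long long maximum;
    unsigned long long address_length;
};

struct address64 {                  /* new layout: fields live one level down */
    unsigned char resource_type;
    struct address_attribute address;
};

unsigned long long phys_base(const struct address64 *a)
{
    return a->address.minimum;      /* was a->minimum before the rework */
}
```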
index 89ae88f9189532d8313a5fee264de96393851cd0..c59bdcb83217071087cc0a90ef62e5cd01ce38d7 100644 (file)
@@ -57,6 +57,16 @@ config X86_ACPI_CPUFREQ_CPB
          By enabling this option the acpi_cpufreq driver provides the old
          entry in addition to the new boost ones, for compatibility reasons.
 
+config X86_SFI_CPUFREQ
+       tristate "SFI Performance-States driver"
+       depends on X86_INTEL_MID && SFI
+       help
+         This adds a CPUFreq driver for some Silvermont-based Intel Atom
+         architectures, like Z34xx and Z35xx, which enumerate processor
+         performance states through SFI.
+
+         If in doubt, say N.
+
 config ELAN_CPUFREQ
        tristate "AMD Elan SC400 and SC410"
        depends on MELAN
index b3ca7b0b2c33aa1ae65b639da68a7c6d328a980b..8b4220ac888b180ba6ef4b1cda06590c5e3ab59f 100644 (file)
@@ -41,6 +41,7 @@ obj-$(CONFIG_X86_P4_CLOCKMOD)         += p4-clockmod.o
 obj-$(CONFIG_X86_CPUFREQ_NFORCE2)      += cpufreq-nforce2.o
 obj-$(CONFIG_X86_INTEL_PSTATE)         += intel_pstate.o
 obj-$(CONFIG_X86_AMD_FREQ_SENSITIVITY) += amd_freq_sensitivity.o
+obj-$(CONFIG_X86_SFI_CPUFREQ)          += sfi-cpufreq.o
 
 ##################################################################################
 # ARM SoC drivers
index fde97d6e31d6d9749698aaf91bfae821f1a72f9f..bab67db54b7eb4fbed633ef0d65eece0637edcaa 100644 (file)
@@ -320,8 +320,7 @@ static int cpufreq_exit(struct cpufreq_policy *policy)
 {
        struct private_data *priv = policy->driver_data;
 
-       if (priv->cdev)
-               cpufreq_cooling_unregister(priv->cdev);
+       cpufreq_cooling_unregister(priv->cdev);
        dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
        of_free_opp_table(priv->cpu_dev);
        clk_put(policy->clk);
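The cpufreq-dt hunk above can drop the `if (priv->cdev)` guard because `cpufreq_cooling_unregister()` tolerates a NULL argument. A sketch of that convention (the same one `free(NULL)` follows), with hypothetical names:

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch of a NULL-tolerant teardown helper: accepting NULL lets every
 * caller's exit path call it unconditionally, with no guard at each site.
 * Names are illustrative, not the thermal API. */

struct cooling_dev { int registered; };

void cooling_unregister(struct cooling_dev *cdev)
{
    if (!cdev)          /* tolerate NULL so callers need no if() */
        return;
    cdev->registered = 0;
    free(cdev);
}
```

Pushing the NULL check into the callee once, instead of into every caller, is the whole point of the cleanup.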
index 46bed4f81cde882e8f1d3b3dbd1b9418b3de2ee5..28e59a48b35fdb993b28f621bdb0b065c0057e73 100644 (file)
 #include <linux/mutex.h>
 #include <linux/slab.h>
 #include <linux/suspend.h>
+#include <linux/syscore_ops.h>
 #include <linux/tick.h>
 #include <trace/events/power.h>
 
+/* Macros to iterate over lists */
+/* Iterate over policies of online CPUs */
+static LIST_HEAD(cpufreq_policy_list);
+#define for_each_policy(__policy)                              \
+       list_for_each_entry(__policy, &cpufreq_policy_list, policy_list)
+
+/* Iterate over governors */
+static LIST_HEAD(cpufreq_governor_list);
+#define for_each_governor(__governor)                          \
+       list_for_each_entry(__governor, &cpufreq_governor_list, governor_list)
+
 /**
  * The "cpufreq driver" - the arch- or hardware-dependent low
  * level driver of CPUFreq support, and its spinlock. This lock
@@ -40,7 +52,6 @@ static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data);
 static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data_fallback);
 static DEFINE_RWLOCK(cpufreq_driver_lock);
 DEFINE_MUTEX(cpufreq_governor_lock);
-static LIST_HEAD(cpufreq_policy_list);
 
 /* This one keeps track of the previously set governor of a removed CPU */
 static DEFINE_PER_CPU(char[CPUFREQ_NAME_LEN], cpufreq_cpu_governor);
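The `for_each_policy()` and `for_each_governor()` macros introduced above hide the list head and member name behind one name, keeping every iteration site short and uniform. A self-contained sketch of the same wrapper over a minimal intrusive list (illustrative names, not the kernel's `list.h`):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal intrusive list plus a for_each_policy()-style wrapper macro.
 * container_of() recovers the enclosing struct from the embedded node. */

struct list_node { struct list_node *next; };

struct policy {
    int cpu;
    struct list_node policy_list;   /* links policies together */
};

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

static struct list_node policy_head = { NULL };

/* The wrapper: callers never name the head or the member again. */
#define for_each_policy(__p)                                                  \
    for (struct list_node *__n = policy_head.next;                            \
         __n && ((__p) = container_of(__n, struct policy, policy_list), 1);   \
         __n = __n->next)

void policy_add(struct policy *p)
{
    p->policy_list.next = policy_head.next;
    policy_head.next = &p->policy_list;
}
```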
@@ -62,7 +73,7 @@ static DECLARE_RWSEM(cpufreq_rwsem);
 /* internal prototypes */
 static int __cpufreq_governor(struct cpufreq_policy *policy,
                unsigned int event);
-static unsigned int __cpufreq_get(unsigned int cpu);
+static unsigned int __cpufreq_get(struct cpufreq_policy *policy);
 static void handle_update(struct work_struct *work);
 
 /**
@@ -93,7 +104,6 @@ void disable_cpufreq(void)
 {
        off = 1;
 }
-static LIST_HEAD(cpufreq_governor_list);
 static DEFINE_MUTEX(cpufreq_governor_mutex);
 
 bool have_governor_per_policy(void)
@@ -202,7 +212,7 @@ struct cpufreq_policy *cpufreq_cpu_get(unsigned int cpu)
        struct cpufreq_policy *policy = NULL;
        unsigned long flags;
 
-       if (cpufreq_disabled() || (cpu >= nr_cpu_ids))
+       if (cpu >= nr_cpu_ids)
                return NULL;
 
        if (!down_read_trylock(&cpufreq_rwsem))
@@ -229,9 +239,6 @@ EXPORT_SYMBOL_GPL(cpufreq_cpu_get);
 
 void cpufreq_cpu_put(struct cpufreq_policy *policy)
 {
-       if (cpufreq_disabled())
-               return;
-
        kobject_put(&policy->kobj);
        up_read(&cpufreq_rwsem);
 }
@@ -249,12 +256,12 @@ EXPORT_SYMBOL_GPL(cpufreq_cpu_put);
  * systems as each CPU might be scaled differently. So, use the arch
  * per-CPU loops_per_jiffy value wherever possible.
  */
-#ifndef CONFIG_SMP
-static unsigned long l_p_j_ref;
-static unsigned int l_p_j_ref_freq;
-
 static void adjust_jiffies(unsigned long val, struct cpufreq_freqs *ci)
 {
+#ifndef CONFIG_SMP
+       static unsigned long l_p_j_ref;
+       static unsigned int l_p_j_ref_freq;
+
        if (ci->flags & CPUFREQ_CONST_LOOPS)
                return;
 
@@ -270,13 +277,8 @@ static void adjust_jiffies(unsigned long val, struct cpufreq_freqs *ci)
                pr_debug("scaling loops_per_jiffy to %lu for frequency %u kHz\n",
                         loops_per_jiffy, ci->new);
        }
-}
-#else
-static inline void adjust_jiffies(unsigned long val, struct cpufreq_freqs *ci)
-{
-       return;
-}
 #endif
+}
 
 static void __cpufreq_notify_transition(struct cpufreq_policy *policy,
                struct cpufreq_freqs *freqs, unsigned int state)
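The `adjust_jiffies()` hunk above collapses two full function definitions (one under `#ifndef CONFIG_SMP`, one stub under `#else`) into a single definition whose conditional code, and its formerly file-scope statics, live inside the body. A sketch of the refactor shape (the symbol `CONFIG_SMP` and the scaling arithmetic are illustrative):

```c
#include <assert.h>

/* One definition with the #ifdef confined to the body, instead of two
 * parallel definitions; the static state moves inside too, shrinking
 * file scope.  Illustrative rescaling logic, not the kernel's. */

static unsigned long loops_per_jiffy = 100000;

void adjust_loops(unsigned int old_khz, unsigned int new_khz)
{
#ifndef CONFIG_SMP
    static unsigned long ref;       /* was file-scope before the cleanup */

    if (!ref)
        ref = loops_per_jiffy;      /* remember the reference calibration */
    if (old_khz)
        loops_per_jiffy = ref * new_khz / old_khz;
#else
    (void)old_khz;                  /* SMP: per-CPU values used instead */
    (void)new_khz;
#endif
}
```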
@@ -432,11 +434,11 @@ static ssize_t store_boost(struct kobject *kobj, struct attribute *attr,
 }
 define_one_global_rw(boost);
 
-static struct cpufreq_governor *__find_governor(const char *str_governor)
+static struct cpufreq_governor *find_governor(const char *str_governor)
 {
        struct cpufreq_governor *t;
 
-       list_for_each_entry(t, &cpufreq_governor_list, governor_list)
+       for_each_governor(t)
                if (!strncasecmp(str_governor, t->name, CPUFREQ_NAME_LEN))
                        return t;
 
@@ -463,12 +465,12 @@ static int cpufreq_parse_governor(char *str_governor, unsigned int *policy,
                        *policy = CPUFREQ_POLICY_POWERSAVE;
                        err = 0;
                }
-       } else if (has_target()) {
+       } else {
                struct cpufreq_governor *t;
 
                mutex_lock(&cpufreq_governor_mutex);
 
-               t = __find_governor(str_governor);
+               t = find_governor(str_governor);
 
                if (t == NULL) {
                        int ret;
@@ -478,7 +480,7 @@ static int cpufreq_parse_governor(char *str_governor, unsigned int *policy,
                        mutex_lock(&cpufreq_governor_mutex);
 
                        if (ret == 0)
-                               t = __find_governor(str_governor);
+                               t = find_governor(str_governor);
                }
 
                if (t != NULL) {
@@ -513,8 +515,7 @@ show_one(cpuinfo_transition_latency, cpuinfo.transition_latency);
 show_one(scaling_min_freq, min);
 show_one(scaling_max_freq, max);
 
-static ssize_t show_scaling_cur_freq(
-       struct cpufreq_policy *policy, char *buf)
+static ssize_t show_scaling_cur_freq(struct cpufreq_policy *policy, char *buf)
 {
        ssize_t ret;
 
@@ -563,7 +564,7 @@ store_one(scaling_max_freq, max);
 static ssize_t show_cpuinfo_cur_freq(struct cpufreq_policy *policy,
                                        char *buf)
 {
-       unsigned int cur_freq = __cpufreq_get(policy->cpu);
+       unsigned int cur_freq = __cpufreq_get(policy);
        if (!cur_freq)
                return sprintf(buf, "<unknown>");
        return sprintf(buf, "%u\n", cur_freq);
@@ -639,7 +640,7 @@ static ssize_t show_scaling_available_governors(struct cpufreq_policy *policy,
                goto out;
        }
 
-       list_for_each_entry(t, &cpufreq_governor_list, governor_list) {
+       for_each_governor(t) {
                if (i >= (ssize_t) ((PAGE_SIZE / sizeof(char))
                    - (CPUFREQ_NAME_LEN + 2)))
                        goto out;
@@ -902,7 +903,7 @@ static int cpufreq_add_dev_interface(struct cpufreq_policy *policy,
 
        /* set up files for this cpu device */
        drv_attr = cpufreq_driver->attr;
-       while ((drv_attr) && (*drv_attr)) {
+       while (drv_attr && *drv_attr) {
                ret = sysfs_create_file(&policy->kobj, &((*drv_attr)->attr));
                if (ret)
                        return ret;
@@ -936,7 +937,7 @@ static void cpufreq_init_policy(struct cpufreq_policy *policy)
        memcpy(&new_policy, policy, sizeof(*policy));
 
        /* Update governor of new_policy to the governor used before hotplug */
-       gov = __find_governor(per_cpu(cpufreq_cpu_governor, policy->cpu));
+       gov = find_governor(per_cpu(cpufreq_cpu_governor, policy->cpu));
        if (gov)
                pr_debug("Restoring governor %s for cpu %d\n",
                                policy->governor->name, policy->cpu);
@@ -958,7 +959,6 @@ static void cpufreq_init_policy(struct cpufreq_policy *policy)
        }
 }
 
-#ifdef CONFIG_HOTPLUG_CPU
 static int cpufreq_add_policy_cpu(struct cpufreq_policy *policy,
                                  unsigned int cpu, struct device *dev)
 {
@@ -996,7 +996,6 @@ static int cpufreq_add_policy_cpu(struct cpufreq_policy *policy,
 
        return sysfs_create_link(&dev->kobj, &policy->kobj, "cpufreq");
 }
-#endif
 
 static struct cpufreq_policy *cpufreq_policy_restore(unsigned int cpu)
 {
@@ -1033,6 +1032,8 @@ static struct cpufreq_policy *cpufreq_policy_alloc(void)
        init_rwsem(&policy->rwsem);
        spin_lock_init(&policy->transition_lock);
        init_waitqueue_head(&policy->transition_wait);
+       init_completion(&policy->kobj_unregister);
+       INIT_WORK(&policy->update, handle_update);
 
        return policy;
 
@@ -1091,15 +1092,9 @@ static int update_policy_cpu(struct cpufreq_policy *policy, unsigned int cpu,
        }
 
        down_write(&policy->rwsem);
-
-       policy->last_cpu = policy->cpu;
        policy->cpu = cpu;
-
        up_write(&policy->rwsem);
 
-       blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
-                       CPUFREQ_UPDATE_POLICY_CPU, policy);
-
        return 0;
 }
 
@@ -1110,41 +1105,32 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
        struct cpufreq_policy *policy;
        unsigned long flags;
        bool recover_policy = cpufreq_suspended;
-#ifdef CONFIG_HOTPLUG_CPU
-       struct cpufreq_policy *tpolicy;
-#endif
 
        if (cpu_is_offline(cpu))
                return 0;
 
        pr_debug("adding CPU %u\n", cpu);
 
-#ifdef CONFIG_SMP
        /* check whether a different CPU already registered this
         * CPU because it is in the same boat. */
-       policy = cpufreq_cpu_get(cpu);
-       if (unlikely(policy)) {
-               cpufreq_cpu_put(policy);
+       policy = cpufreq_cpu_get_raw(cpu);
+       if (unlikely(policy))
                return 0;
-       }
-#endif
 
        if (!down_read_trylock(&cpufreq_rwsem))
                return 0;
 
-#ifdef CONFIG_HOTPLUG_CPU
        /* Check if this cpu was hot-unplugged earlier and has siblings */
        read_lock_irqsave(&cpufreq_driver_lock, flags);
-       list_for_each_entry(tpolicy, &cpufreq_policy_list, policy_list) {
-               if (cpumask_test_cpu(cpu, tpolicy->related_cpus)) {
+       for_each_policy(policy) {
+               if (cpumask_test_cpu(cpu, policy->related_cpus)) {
                        read_unlock_irqrestore(&cpufreq_driver_lock, flags);
-                       ret = cpufreq_add_policy_cpu(tpolicy, cpu, dev);
+                       ret = cpufreq_add_policy_cpu(policy, cpu, dev);
                        up_read(&cpufreq_rwsem);
                        return ret;
                }
        }
        read_unlock_irqrestore(&cpufreq_driver_lock, flags);
-#endif
 
        /*
         * Restore the saved policy when doing light-weight init and fall back
@@ -1171,9 +1157,6 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
 
        cpumask_copy(policy->cpus, cpumask_of(cpu));
 
-       init_completion(&policy->kobj_unregister);
-       INIT_WORK(&policy->update, handle_update);
-
        /* call driver. From then on the cpufreq must be able
         * to accept all calls to ->verify and ->setpolicy for this CPU
         */
@@ -1371,11 +1354,10 @@ static int __cpufreq_remove_dev_prepare(struct device *dev,
                        pr_err("%s: Failed to stop governor\n", __func__);
                        return ret;
                }
-       }
 
-       if (!cpufreq_driver->setpolicy)
                strncpy(per_cpu(cpufreq_cpu_governor, cpu),
                        policy->governor->name, CPUFREQ_NAME_LEN);
+       }
 
        down_read(&policy->rwsem);
        cpus = cpumask_weight(policy->cpus);
@@ -1416,9 +1398,10 @@ static int __cpufreq_remove_dev_finish(struct device *dev,
        unsigned long flags;
        struct cpufreq_policy *policy;
 
-       read_lock_irqsave(&cpufreq_driver_lock, flags);
+       write_lock_irqsave(&cpufreq_driver_lock, flags);
        policy = per_cpu(cpufreq_cpu_data, cpu);
-       read_unlock_irqrestore(&cpufreq_driver_lock, flags);
+       per_cpu(cpufreq_cpu_data, cpu) = NULL;
+       write_unlock_irqrestore(&cpufreq_driver_lock, flags);
 
        if (!policy) {
                pr_debug("%s: No cpu_data found\n", __func__);
@@ -1473,7 +1456,6 @@ static int __cpufreq_remove_dev_finish(struct device *dev,
                }
        }
 
-       per_cpu(cpufreq_cpu_data, cpu) = NULL;
        return 0;
 }
 
@@ -1510,30 +1492,23 @@ static void handle_update(struct work_struct *work)
 /**
  *     cpufreq_out_of_sync - If actual and saved CPU frequency differs, we're
  *     in deep trouble.
- *     @cpu: cpu number
- *     @old_freq: CPU frequency the kernel thinks the CPU runs at
+ *     @policy: policy managing CPUs
  *     @new_freq: CPU frequency the CPU actually runs at
  *
  *     We adjust to current frequency first, and need to clean up later.
  *     So either call to cpufreq_update_policy() or schedule handle_update()).
  */
-static void cpufreq_out_of_sync(unsigned int cpu, unsigned int old_freq,
+static void cpufreq_out_of_sync(struct cpufreq_policy *policy,
                                unsigned int new_freq)
 {
-       struct cpufreq_policy *policy;
        struct cpufreq_freqs freqs;
-       unsigned long flags;
 
        pr_debug("Warning: CPU frequency out of sync: cpufreq and timing core thinks of %u, is %u kHz\n",
-                old_freq, new_freq);
+                policy->cur, new_freq);
 
-       freqs.old = old_freq;
+       freqs.old = policy->cur;
        freqs.new = new_freq;
 
-       read_lock_irqsave(&cpufreq_driver_lock, flags);
-       policy = per_cpu(cpufreq_cpu_data, cpu);
-       read_unlock_irqrestore(&cpufreq_driver_lock, flags);
-
        cpufreq_freq_transition_begin(policy, &freqs);
        cpufreq_freq_transition_end(policy, &freqs, 0);
 }
@@ -1583,22 +1558,21 @@ unsigned int cpufreq_quick_get_max(unsigned int cpu)
 }
 EXPORT_SYMBOL(cpufreq_quick_get_max);
 
-static unsigned int __cpufreq_get(unsigned int cpu)
+static unsigned int __cpufreq_get(struct cpufreq_policy *policy)
 {
-       struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu);
        unsigned int ret_freq = 0;
 
        if (!cpufreq_driver->get)
                return ret_freq;
 
-       ret_freq = cpufreq_driver->get(cpu);
+       ret_freq = cpufreq_driver->get(policy->cpu);
 
        if (ret_freq && policy->cur &&
                !(cpufreq_driver->flags & CPUFREQ_CONST_LOOPS)) {
                /* verify no discrepancy between actual and
                                        saved value exists */
                if (unlikely(ret_freq != policy->cur)) {
-                       cpufreq_out_of_sync(cpu, policy->cur, ret_freq);
+                       cpufreq_out_of_sync(policy, ret_freq);
                        schedule_work(&policy->update);
                }
        }
@@ -1619,7 +1593,7 @@ unsigned int cpufreq_get(unsigned int cpu)
 
        if (policy) {
                down_read(&policy->rwsem);
-               ret_freq = __cpufreq_get(cpu);
+               ret_freq = __cpufreq_get(policy);
                up_read(&policy->rwsem);
 
                cpufreq_cpu_put(policy);
@@ -1682,7 +1656,7 @@ void cpufreq_suspend(void)
 
        pr_debug("%s: Suspending Governors\n", __func__);
 
-       list_for_each_entry(policy, &cpufreq_policy_list, policy_list) {
+       for_each_policy(policy) {
                if (__cpufreq_governor(policy, CPUFREQ_GOV_STOP))
                        pr_err("%s: Failed to stop governor for policy: %p\n",
                                __func__, policy);
@@ -1716,7 +1690,7 @@ void cpufreq_resume(void)
 
        pr_debug("%s: Resuming Governors\n", __func__);
 
-       list_for_each_entry(policy, &cpufreq_policy_list, policy_list) {
+       for_each_policy(policy) {
                if (cpufreq_driver->resume && cpufreq_driver->resume(policy))
                        pr_err("%s: Failed to resume driver: %p\n", __func__,
                                policy);
@@ -2006,10 +1980,6 @@ int cpufreq_driver_target(struct cpufreq_policy *policy,
 }
 EXPORT_SYMBOL_GPL(cpufreq_driver_target);
 
-/*
- * when "event" is CPUFREQ_GOV_LIMITS
- */
-
 static int __cpufreq_governor(struct cpufreq_policy *policy,
                                        unsigned int event)
 {
@@ -2107,7 +2077,7 @@ int cpufreq_register_governor(struct cpufreq_governor *governor)
 
        governor->initialized = 0;
        err = -EBUSY;
-       if (__find_governor(governor->name) == NULL) {
+       if (!find_governor(governor->name)) {
                err = 0;
                list_add(&governor->governor_list, &cpufreq_governor_list);
        }
@@ -2307,8 +2277,7 @@ int cpufreq_update_policy(unsigned int cpu)
                        policy->cur = new_policy.cur;
                } else {
                        if (policy->cur != new_policy.cur && has_target())
-                               cpufreq_out_of_sync(cpu, policy->cur,
-                                                               new_policy.cur);
+                               cpufreq_out_of_sync(policy, new_policy.cur);
                }
        }
 
@@ -2364,7 +2333,7 @@ static int cpufreq_boost_set_sw(int state)
        struct cpufreq_policy *policy;
        int ret = -EINVAL;
 
-       list_for_each_entry(policy, &cpufreq_policy_list, policy_list) {
+       for_each_policy(policy) {
                freq_table = cpufreq_frequency_get_table(policy->cpu);
                if (freq_table) {
                        ret = cpufreq_frequency_table_cpuinfo(policy,
@@ -2454,9 +2423,6 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
 
        pr_debug("trying to register driver %s\n", driver_data->name);
 
-       if (driver_data->setpolicy)
-               driver_data->flags |= CPUFREQ_CONST_LOOPS;
-
        write_lock_irqsave(&cpufreq_driver_lock, flags);
        if (cpufreq_driver) {
                write_unlock_irqrestore(&cpufreq_driver_lock, flags);
@@ -2465,6 +2431,9 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
        cpufreq_driver = driver_data;
        write_unlock_irqrestore(&cpufreq_driver_lock, flags);
 
+       if (driver_data->setpolicy)
+               driver_data->flags |= CPUFREQ_CONST_LOOPS;
+
        if (cpufreq_boost_supported()) {
                /*
                 * Check if driver provides function to enable boost -
@@ -2485,23 +2454,12 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
        if (ret)
                goto err_boost_unreg;
 
-       if (!(cpufreq_driver->flags & CPUFREQ_STICKY)) {
-               int i;
-               ret = -ENODEV;
-
-               /* check for at least one working CPU */
-               for (i = 0; i < nr_cpu_ids; i++)
-                       if (cpu_possible(i) && per_cpu(cpufreq_cpu_data, i)) {
-                               ret = 0;
-                               break;
-                       }
-
+       if (!(cpufreq_driver->flags & CPUFREQ_STICKY) &&
+           list_empty(&cpufreq_policy_list)) {
                /* if all ->init() calls failed, unregister */
-               if (ret) {
-                       pr_debug("no CPU initialized for driver %s\n",
-                                driver_data->name);
-                       goto err_if_unreg;
-               }
+               pr_debug("%s: No CPU initialized for driver %s\n", __func__,
+                        driver_data->name);
+               goto err_if_unreg;
        }
 
        register_hotcpu_notifier(&cpufreq_cpu_notifier);
@@ -2556,6 +2514,14 @@ int cpufreq_unregister_driver(struct cpufreq_driver *driver)
 }
 EXPORT_SYMBOL_GPL(cpufreq_unregister_driver);
 
+/*
+ * Stop cpufreq at shutdown to make sure it isn't holding any locks
+ * or mutexes when secondary CPUs are halted.
+ */
+static struct syscore_ops cpufreq_syscore_ops = {
+       .shutdown = cpufreq_suspend,
+};
+
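The `cpufreq_syscore_ops` registration above arranges for `cpufreq_suspend()` to run at shutdown, so cpufreq holds no locks when secondary CPUs are halted. A userspace analogue of registering such a late-teardown hook, with `atexit()` standing in for `register_syscore_ops()` (illustrative names only):

```c
#include <assert.h>
#include <stdlib.h>

/* Register a "stop everything" callback once at init time; the runtime
 * invokes it during orderly shutdown.  atexit() plays the role of
 * register_syscore_ops(); the flag plays the role of stopping governors. */

static int governors_stopped;

static void subsystem_suspend(void)     /* analogue of cpufreq_suspend() */
{
    governors_stopped = 1;
}

int subsystem_init(void)
{
    return atexit(subsystem_suspend);   /* 0 on successful registration */
}
```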
 static int __init cpufreq_core_init(void)
 {
        if (cpufreq_disabled())
@@ -2564,6 +2530,8 @@ static int __init cpufreq_core_init(void)
        cpufreq_global_kobject = kobject_create();
        BUG_ON(!cpufreq_global_kobject);
 
+       register_syscore_ops(&cpufreq_syscore_ops);
+
        return 0;
 }
 core_initcall(cpufreq_core_init);
index 0cd9b4dcef997d4814c5d2570092b46d26b4e143..5e370a30a964f5f30e9944daa9a768507c938af4 100644 (file)
@@ -18,7 +18,6 @@
 static spinlock_t cpufreq_stats_lock;
 
 struct cpufreq_stats {
-       unsigned int cpu;
        unsigned int total_trans;
        unsigned long long last_time;
        unsigned int max_state;
@@ -31,50 +30,33 @@ struct cpufreq_stats {
 #endif
 };
 
-static DEFINE_PER_CPU(struct cpufreq_stats *, cpufreq_stats_table);
-
-struct cpufreq_stats_attribute {
-       struct attribute attr;
-       ssize_t(*show) (struct cpufreq_stats *, char *);
-};
-
-static int cpufreq_stats_update(unsigned int cpu)
+static int cpufreq_stats_update(struct cpufreq_stats *stats)
 {
-       struct cpufreq_stats *stat;
-       unsigned long long cur_time;
+       unsigned long long cur_time = get_jiffies_64();
 
-       cur_time = get_jiffies_64();
        spin_lock(&cpufreq_stats_lock);
-       stat = per_cpu(cpufreq_stats_table, cpu);
-       if (stat->time_in_state)
-               stat->time_in_state[stat->last_index] +=
-                       cur_time - stat->last_time;
-       stat->last_time = cur_time;
+       stats->time_in_state[stats->last_index] += cur_time - stats->last_time;
+       stats->last_time = cur_time;
        spin_unlock(&cpufreq_stats_lock);
        return 0;
 }
 
 static ssize_t show_total_trans(struct cpufreq_policy *policy, char *buf)
 {
-       struct cpufreq_stats *stat = per_cpu(cpufreq_stats_table, policy->cpu);
-       if (!stat)
-               return 0;
-       return sprintf(buf, "%d\n",
-                       per_cpu(cpufreq_stats_table, stat->cpu)->total_trans);
+       return sprintf(buf, "%d\n", policy->stats->total_trans);
 }
 
 static ssize_t show_time_in_state(struct cpufreq_policy *policy, char *buf)
 {
+       struct cpufreq_stats *stats = policy->stats;
        ssize_t len = 0;
        int i;
-       struct cpufreq_stats *stat = per_cpu(cpufreq_stats_table, policy->cpu);
-       if (!stat)
-               return 0;
-       cpufreq_stats_update(stat->cpu);
-       for (i = 0; i < stat->state_num; i++) {
-               len += sprintf(buf + len, "%u %llu\n", stat->freq_table[i],
+
+       cpufreq_stats_update(stats);
+       for (i = 0; i < stats->state_num; i++) {
+               len += sprintf(buf + len, "%u %llu\n", stats->freq_table[i],
                        (unsigned long long)
-                       jiffies_64_to_clock_t(stat->time_in_state[i]));
+                       jiffies_64_to_clock_t(stats->time_in_state[i]));
        }
        return len;
 }
@@ -82,38 +64,35 @@ static ssize_t show_time_in_state(struct cpufreq_policy *policy, char *buf)
 #ifdef CONFIG_CPU_FREQ_STAT_DETAILS
 static ssize_t show_trans_table(struct cpufreq_policy *policy, char *buf)
 {
+       struct cpufreq_stats *stats = policy->stats;
        ssize_t len = 0;
        int i, j;
 
-       struct cpufreq_stats *stat = per_cpu(cpufreq_stats_table, policy->cpu);
-       if (!stat)
-               return 0;
-       cpufreq_stats_update(stat->cpu);
        len += snprintf(buf + len, PAGE_SIZE - len, "   From  :    To\n");
        len += snprintf(buf + len, PAGE_SIZE - len, "         : ");
-       for (i = 0; i < stat->state_num; i++) {
+       for (i = 0; i < stats->state_num; i++) {
                if (len >= PAGE_SIZE)
                        break;
                len += snprintf(buf + len, PAGE_SIZE - len, "%9u ",
-                               stat->freq_table[i]);
+                               stats->freq_table[i]);
        }
        if (len >= PAGE_SIZE)
                return PAGE_SIZE;
 
        len += snprintf(buf + len, PAGE_SIZE - len, "\n");
 
-       for (i = 0; i < stat->state_num; i++) {
+       for (i = 0; i < stats->state_num; i++) {
                if (len >= PAGE_SIZE)
                        break;
 
                len += snprintf(buf + len, PAGE_SIZE - len, "%9u: ",
-                               stat->freq_table[i]);
+                               stats->freq_table[i]);
 
-               for (j = 0; j < stat->state_num; j++) {
+               for (j = 0; j < stats->state_num; j++) {
                        if (len >= PAGE_SIZE)
                                break;
                        len += snprintf(buf + len, PAGE_SIZE - len, "%9u ",
-                                       stat->trans_table[i*stat->max_state+j]);
+                                       stats->trans_table[i*stats->max_state+j]);
                }
                if (len >= PAGE_SIZE)
                        break;
@@ -142,28 +121,29 @@ static struct attribute_group stats_attr_group = {
        .name = "stats"
 };
 
-static int freq_table_get_index(struct cpufreq_stats *stat, unsigned int freq)
+static int freq_table_get_index(struct cpufreq_stats *stats, unsigned int freq)
 {
        int index;
-       for (index = 0; index < stat->max_state; index++)
-               if (stat->freq_table[index] == freq)
+       for (index = 0; index < stats->max_state; index++)
+               if (stats->freq_table[index] == freq)
                        return index;
        return -1;
 }
 
 static void __cpufreq_stats_free_table(struct cpufreq_policy *policy)
 {
-       struct cpufreq_stats *stat = per_cpu(cpufreq_stats_table, policy->cpu);
+       struct cpufreq_stats *stats = policy->stats;
 
-       if (!stat)
+       /* Already freed */
+       if (!stats)
                return;
 
-       pr_debug("%s: Free stat table\n", __func__);
+       pr_debug("%s: Free stats table\n", __func__);
 
        sysfs_remove_group(&policy->kobj, &stats_attr_group);
-       kfree(stat->time_in_state);
-       kfree(stat);
-       per_cpu(cpufreq_stats_table, policy->cpu) = NULL;
+       kfree(stats->time_in_state);
+       kfree(stats);
+       policy->stats = NULL;
 }
 
 static void cpufreq_stats_free_table(unsigned int cpu)
@@ -174,37 +154,33 @@ static void cpufreq_stats_free_table(unsigned int cpu)
        if (!policy)
                return;
 
-       if (cpufreq_frequency_get_table(policy->cpu))
-               __cpufreq_stats_free_table(policy);
+       __cpufreq_stats_free_table(policy);
 
        cpufreq_cpu_put(policy);
 }
 
 static int __cpufreq_stats_create_table(struct cpufreq_policy *policy)
 {
-       unsigned int i, count = 0, ret = 0;
-       struct cpufreq_stats *stat;
+       unsigned int i = 0, count = 0, ret = -ENOMEM;
+       struct cpufreq_stats *stats;
        unsigned int alloc_size;
        unsigned int cpu = policy->cpu;
        struct cpufreq_frequency_table *pos, *table;
 
+       /* We need cpufreq table for creating stats table */
        table = cpufreq_frequency_get_table(cpu);
        if (unlikely(!table))
                return 0;
 
-       if (per_cpu(cpufreq_stats_table, cpu))
-               return -EBUSY;
-       stat = kzalloc(sizeof(*stat), GFP_KERNEL);
-       if ((stat) == NULL)
-               return -ENOMEM;
-
-       ret = sysfs_create_group(&policy->kobj, &stats_attr_group);
-       if (ret)
-               goto error_out;
+       /* stats already initialized */
+       if (policy->stats)
+               return -EEXIST;
 
-       stat->cpu = cpu;
-       per_cpu(cpufreq_stats_table, cpu) = stat;
+       stats = kzalloc(sizeof(*stats), GFP_KERNEL);
+       if (!stats)
+               return -ENOMEM;
 
+       /* Find total allocation size */
        cpufreq_for_each_valid_entry(pos, table)
                count++;
 
@@ -213,32 +189,40 @@ static int __cpufreq_stats_create_table(struct cpufreq_policy *policy)
 #ifdef CONFIG_CPU_FREQ_STAT_DETAILS
        alloc_size += count * count * sizeof(int);
 #endif
-       stat->max_state = count;
-       stat->time_in_state = kzalloc(alloc_size, GFP_KERNEL);
-       if (!stat->time_in_state) {
-               ret = -ENOMEM;
-               goto error_alloc;
-       }
-       stat->freq_table = (unsigned int *)(stat->time_in_state + count);
+
+       /* Allocate memory for time_in_state/freq_table/trans_table in one go */
+       stats->time_in_state = kzalloc(alloc_size, GFP_KERNEL);
+       if (!stats->time_in_state)
+               goto free_stat;
+
+       stats->freq_table = (unsigned int *)(stats->time_in_state + count);
 
 #ifdef CONFIG_CPU_FREQ_STAT_DETAILS
-       stat->trans_table = stat->freq_table + count;
+       stats->trans_table = stats->freq_table + count;
 #endif
-       i = 0;
+
+       stats->max_state = count;
+
+       /* Find valid-unique entries */
        cpufreq_for_each_valid_entry(pos, table)
-               if (freq_table_get_index(stat, pos->frequency) == -1)
-                       stat->freq_table[i++] = pos->frequency;
-       stat->state_num = i;
-       spin_lock(&cpufreq_stats_lock);
-       stat->last_time = get_jiffies_64();
-       stat->last_index = freq_table_get_index(stat, policy->cur);
-       spin_unlock(&cpufreq_stats_lock);
-       return 0;
-error_alloc:
-       sysfs_remove_group(&policy->kobj, &stats_attr_group);
-error_out:
-       kfree(stat);
-       per_cpu(cpufreq_stats_table, cpu) = NULL;
+               if (freq_table_get_index(stats, pos->frequency) == -1)
+                       stats->freq_table[i++] = pos->frequency;
+
+       stats->state_num = i;
+       stats->last_time = get_jiffies_64();
+       stats->last_index = freq_table_get_index(stats, policy->cur);
+
+       policy->stats = stats;
+       ret = sysfs_create_group(&policy->kobj, &stats_attr_group);
+       if (!ret)
+               return 0;
+
+       /* We failed, release resources */
+       policy->stats = NULL;
+       kfree(stats->time_in_state);
+free_stat:
+       kfree(stats);
+
        return ret;
 }
 
@@ -259,30 +243,12 @@ static void cpufreq_stats_create_table(unsigned int cpu)
        cpufreq_cpu_put(policy);
 }
 
-static void cpufreq_stats_update_policy_cpu(struct cpufreq_policy *policy)
-{
-       struct cpufreq_stats *stat = per_cpu(cpufreq_stats_table,
-                       policy->last_cpu);
-
-       pr_debug("Updating stats_table for new_cpu %u from last_cpu %u\n",
-                       policy->cpu, policy->last_cpu);
-       per_cpu(cpufreq_stats_table, policy->cpu) = per_cpu(cpufreq_stats_table,
-                       policy->last_cpu);
-       per_cpu(cpufreq_stats_table, policy->last_cpu) = NULL;
-       stat->cpu = policy->cpu;
-}
-
 static int cpufreq_stat_notifier_policy(struct notifier_block *nb,
                unsigned long val, void *data)
 {
        int ret = 0;
        struct cpufreq_policy *policy = data;
 
-       if (val == CPUFREQ_UPDATE_POLICY_CPU) {
-               cpufreq_stats_update_policy_cpu(policy);
-               return 0;
-       }
-
        if (val == CPUFREQ_CREATE_POLICY)
                ret = __cpufreq_stats_create_table(policy);
        else if (val == CPUFREQ_REMOVE_POLICY)
@@ -295,35 +261,45 @@ static int cpufreq_stat_notifier_trans(struct notifier_block *nb,
                unsigned long val, void *data)
 {
        struct cpufreq_freqs *freq = data;
-       struct cpufreq_stats *stat;
+       struct cpufreq_policy *policy = cpufreq_cpu_get(freq->cpu);
+       struct cpufreq_stats *stats;
        int old_index, new_index;
 
-       if (val != CPUFREQ_POSTCHANGE)
+       if (!policy) {
+               pr_err("%s: No policy found\n", __func__);
                return 0;
+       }
 
-       stat = per_cpu(cpufreq_stats_table, freq->cpu);
-       if (!stat)
-               return 0;
+       if (val != CPUFREQ_POSTCHANGE)
+               goto put_policy;
 
-       old_index = stat->last_index;
-       new_index = freq_table_get_index(stat, freq->new);
+       if (!policy->stats) {
+               pr_debug("%s: No stats found\n", __func__);
+               goto put_policy;
+       }
 
-       /* We can't do stat->time_in_state[-1]= .. */
-       if (old_index == -1 || new_index == -1)
-               return 0;
+       stats = policy->stats;
+
+       old_index = stats->last_index;
+       new_index = freq_table_get_index(stats, freq->new);
 
-       cpufreq_stats_update(freq->cpu);
+       /* We can't do stats->time_in_state[-1]= .. */
+       if (old_index == -1 || new_index == -1)
+               goto put_policy;
 
        if (old_index == new_index)
-               return 0;
+               goto put_policy;
 
-       spin_lock(&cpufreq_stats_lock);
-       stat->last_index = new_index;
+       cpufreq_stats_update(stats);
+
+       stats->last_index = new_index;
 #ifdef CONFIG_CPU_FREQ_STAT_DETAILS
-       stat->trans_table[old_index * stat->max_state + new_index]++;
+       stats->trans_table[old_index * stats->max_state + new_index]++;
 #endif
-       stat->total_trans++;
-       spin_unlock(&cpufreq_stats_lock);
+       stats->total_trans++;
+
+put_policy:
+       cpufreq_cpu_put(policy);
        return 0;
 }
 
@@ -374,8 +350,7 @@ static void __exit cpufreq_stats_exit(void)
 }
 
 MODULE_AUTHOR("Zou Nan hai <nanhai.zou@intel.com>");
-MODULE_DESCRIPTION("'cpufreq_stats' - A driver to export cpufreq stats "
-                               "through sysfs filesystem");
+MODULE_DESCRIPTION("Export cpufreq stats via sysfs");
 MODULE_LICENSE("GPL");
 
 module_init(cpufreq_stats_init);
index 742eefba12c2101d8bc8e035958d7bb59596ef93..872c5772c5d358ab8fafcf723579cbf2b020dc47 100644 (file)
@@ -148,6 +148,8 @@ struct perf_limits {
        int32_t min_perf;
        int max_policy_pct;
        int max_sysfs_pct;
+       int min_policy_pct;
+       int min_sysfs_pct;
 };
 
 static struct perf_limits limits = {
@@ -159,6 +161,8 @@ static struct perf_limits limits = {
        .min_perf = 0,
        .max_policy_pct = 100,
        .max_sysfs_pct = 100,
+       .min_policy_pct = 0,
+       .min_sysfs_pct = 0,
 };
 
 static inline void pid_reset(struct _pid *pid, int setpoint, int busy,
@@ -338,6 +342,33 @@ static void __init intel_pstate_debug_expose_params(void)
                return sprintf(buf, "%u\n", limits.object);             \
        }
 
+static ssize_t show_turbo_pct(struct kobject *kobj,
+                               struct attribute *attr, char *buf)
+{
+       struct cpudata *cpu;
+       int total, no_turbo, turbo_pct;
+       uint32_t turbo_fp;
+
+       cpu = all_cpu_data[0];
+
+       total = cpu->pstate.turbo_pstate - cpu->pstate.min_pstate + 1;
+       no_turbo = cpu->pstate.max_pstate - cpu->pstate.min_pstate + 1;
+       turbo_fp = div_fp(int_tofp(no_turbo), int_tofp(total));
+       turbo_pct = 100 - fp_toint(mul_fp(turbo_fp, int_tofp(100)));
+       return sprintf(buf, "%u\n", turbo_pct);
+}
+
+static ssize_t show_num_pstates(struct kobject *kobj,
+                               struct attribute *attr, char *buf)
+{
+       struct cpudata *cpu;
+       int total;
+
+       cpu = all_cpu_data[0];
+       total = cpu->pstate.turbo_pstate - cpu->pstate.min_pstate + 1;
+       return sprintf(buf, "%u\n", total);
+}
+
 static ssize_t show_no_turbo(struct kobject *kobj,
                             struct attribute *attr, char *buf)
 {
@@ -404,7 +435,9 @@ static ssize_t store_min_perf_pct(struct kobject *a, struct attribute *b,
        ret = sscanf(buf, "%u", &input);
        if (ret != 1)
                return -EINVAL;
-       limits.min_perf_pct = clamp_t(int, input, 0 , 100);
+
+       limits.min_sysfs_pct = clamp_t(int, input, 0 , 100);
+       limits.min_perf_pct = max(limits.min_policy_pct, limits.min_sysfs_pct);
        limits.min_perf = div_fp(int_tofp(limits.min_perf_pct), int_tofp(100));
 
        if (hwp_active)
@@ -418,11 +451,15 @@ show_one(min_perf_pct, min_perf_pct);
 define_one_global_rw(no_turbo);
 define_one_global_rw(max_perf_pct);
 define_one_global_rw(min_perf_pct);
+define_one_global_ro(turbo_pct);
+define_one_global_ro(num_pstates);
 
 static struct attribute *intel_pstate_attributes[] = {
        &no_turbo.attr,
        &max_perf_pct.attr,
        &min_perf_pct.attr,
+       &turbo_pct.attr,
+       &num_pstates.attr,
        NULL
 };
 
@@ -825,6 +862,7 @@ static const struct x86_cpu_id intel_pstate_cpu_ids[] = {
        ICPU(0x46, core_params),
        ICPU(0x47, core_params),
        ICPU(0x4c, byt_params),
+       ICPU(0x4e, core_params),
        ICPU(0x4f, core_params),
        ICPU(0x56, core_params),
        {}
@@ -887,7 +925,9 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy)
        if (!policy->cpuinfo.max_freq)
                return -ENODEV;
 
-       if (policy->policy == CPUFREQ_POLICY_PERFORMANCE) {
+       if (policy->policy == CPUFREQ_POLICY_PERFORMANCE &&
+           policy->max >= policy->cpuinfo.max_freq) {
+               limits.min_policy_pct = 100;
                limits.min_perf_pct = 100;
                limits.min_perf = int_tofp(1);
                limits.max_policy_pct = 100;
@@ -897,8 +937,9 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy)
                return 0;
        }
 
-       limits.min_perf_pct = (policy->min * 100) / policy->cpuinfo.max_freq;
-       limits.min_perf_pct = clamp_t(int, limits.min_perf_pct, 0 , 100);
+       limits.min_policy_pct = (policy->min * 100) / policy->cpuinfo.max_freq;
+       limits.min_policy_pct = clamp_t(int, limits.min_policy_pct, 0 , 100);
+       limits.min_perf_pct = max(limits.min_policy_pct, limits.min_sysfs_pct);
        limits.min_perf = div_fp(int_tofp(limits.min_perf_pct), int_tofp(100));
 
        limits.max_policy_pct = (policy->max * 100) / policy->cpuinfo.max_freq;
@@ -978,6 +1019,7 @@ static struct cpufreq_driver intel_pstate_driver = {
 
 static int __initdata no_load;
 static int __initdata no_hwp;
+static int __initdata hwp_only;
 static unsigned int force_load;
 
 static int intel_pstate_msrs_not_valid(void)
@@ -1175,6 +1217,9 @@ static int __init intel_pstate_init(void)
        if (cpu_has(c,X86_FEATURE_HWP) && !no_hwp)
                intel_pstate_hwp_enable();
 
+       if (!hwp_active && hwp_only)
+               goto out;
+
        rc = cpufreq_register_driver(&intel_pstate_driver);
        if (rc)
                goto out;
@@ -1209,6 +1254,8 @@ static int __init intel_pstate_setup(char *str)
                no_hwp = 1;
        if (!strcmp(str, "force"))
                force_load = 1;
+       if (!strcmp(str, "hwp_only"))
+               hwp_only = 1;
        return 0;
 }
 early_param("intel_pstate", intel_pstate_setup);
index 25fbd6a1374fc7db998d1238823c20bc3b5c7280..f0913eee2f5007a2ba8ca5ee7b1adea21bfc8215 100644 (file)
@@ -210,7 +210,6 @@ out:
 static struct platform_driver ls1x_cpufreq_platdrv = {
        .driver = {
                .name   = "ls1x-cpufreq",
-               .owner  = THIS_MODULE,
        },
        .probe          = ls1x_cpufreq_probe,
        .remove         = ls1x_cpufreq_remove,
diff --git a/drivers/cpufreq/sfi-cpufreq.c b/drivers/cpufreq/sfi-cpufreq.c
new file mode 100644 (file)
index 0000000..ffa3389
--- /dev/null
@@ -0,0 +1,136 @@
+/*
+ *  SFI Performance States Driver
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Author: Vishwesh M Rudramuni <vishwesh.m.rudramuni@intel.com>
+ *  Author: Srinidhi Kasagar <srinidhi.kasagar@intel.com>
+ */
+
+#include <linux/cpufreq.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/sfi.h>
+#include <linux/slab.h>
+#include <linux/smp.h>
+
+#include <asm/msr.h>
+
+struct cpufreq_frequency_table *freq_table;
+static struct sfi_freq_table_entry *sfi_cpufreq_array;
+static int num_freq_table_entries;
+
+static int sfi_parse_freq(struct sfi_table_header *table)
+{
+       struct sfi_table_simple *sb;
+       struct sfi_freq_table_entry *pentry;
+       int totallen;
+
+       sb = (struct sfi_table_simple *)table;
+       num_freq_table_entries = SFI_GET_NUM_ENTRIES(sb,
+                       struct sfi_freq_table_entry);
+       if (num_freq_table_entries <= 1) {
+               pr_err("No p-states discovered\n");
+               return -ENODEV;
+       }
+
+       pentry = (struct sfi_freq_table_entry *)sb->pentry;
+       totallen = num_freq_table_entries * sizeof(*pentry);
+
+       sfi_cpufreq_array = kzalloc(totallen, GFP_KERNEL);
+       if (!sfi_cpufreq_array)
+               return -ENOMEM;
+
+       memcpy(sfi_cpufreq_array, pentry, totallen);
+
+       return 0;
+}
+
+static int sfi_cpufreq_target(struct cpufreq_policy *policy, unsigned int index)
+{
+       unsigned int next_perf_state = 0; /* Index into perf table */
+       u32 lo, hi;
+
+       next_perf_state = policy->freq_table[index].driver_data;
+
+       rdmsr_on_cpu(policy->cpu, MSR_IA32_PERF_CTL, &lo, &hi);
+       lo = (lo & ~INTEL_PERF_CTL_MASK) |
+               ((u32) sfi_cpufreq_array[next_perf_state].ctrl_val &
+               INTEL_PERF_CTL_MASK);
+       wrmsr_on_cpu(policy->cpu, MSR_IA32_PERF_CTL, lo, hi);
+
+       return 0;
+}
+
+static int sfi_cpufreq_cpu_init(struct cpufreq_policy *policy)
+{
+       policy->shared_type = CPUFREQ_SHARED_TYPE_HW;
+       policy->cpuinfo.transition_latency = 100000;    /* 100us */
+
+       return cpufreq_table_validate_and_show(policy, freq_table);
+}
+
+static struct cpufreq_driver sfi_cpufreq_driver = {
+       .flags          = CPUFREQ_CONST_LOOPS,
+       .verify         = cpufreq_generic_frequency_table_verify,
+       .target_index   = sfi_cpufreq_target,
+       .init           = sfi_cpufreq_cpu_init,
+       .name           = "sfi-cpufreq",
+       .attr           = cpufreq_generic_attr,
+};
+
+static int __init sfi_cpufreq_init(void)
+{
+       int ret, i;
+
+       /* parse the freq table from SFI */
+       ret = sfi_table_parse(SFI_SIG_FREQ, NULL, NULL, sfi_parse_freq);
+       if (ret)
+               return ret;
+
+       freq_table = kzalloc(sizeof(*freq_table) *
+                       (num_freq_table_entries + 1), GFP_KERNEL);
+       if (!freq_table) {
+               ret = -ENOMEM;
+               goto err_free_array;
+       }
+
+       for (i = 0; i < num_freq_table_entries; i++) {
+               freq_table[i].driver_data = i;
+               freq_table[i].frequency = sfi_cpufreq_array[i].freq_mhz * 1000;
+       }
+       freq_table[i].frequency = CPUFREQ_TABLE_END;
+
+       ret = cpufreq_register_driver(&sfi_cpufreq_driver);
+       if (ret)
+               goto err_free_tbl;
+
+       return ret;
+
+err_free_tbl:
+       kfree(freq_table);
+err_free_array:
+       kfree(sfi_cpufreq_array);
+       return ret;
+}
+late_initcall(sfi_cpufreq_init);
+
+static void __exit sfi_cpufreq_exit(void)
+{
+       cpufreq_unregister_driver(&sfi_cpufreq_driver);
+       kfree(freq_table);
+       kfree(sfi_cpufreq_array);
+}
+module_exit(sfi_cpufreq_exit);
+
+MODULE_AUTHOR("Vishwesh M Rudramuni <vishwesh.m.rudramuni@intel.com>");
+MODULE_DESCRIPTION("SFI Performance-States Driver");
+MODULE_LICENSE("GPL");
index e3e225fe6b45ed33418b0bd1357eb257e487ba70..40c34faffe594951c7e6aea92f730fbfd201c71f 100644 (file)
@@ -182,6 +182,10 @@ static int __init bl_idle_init(void)
         */
        if (!of_match_node(compatible_machine_match, root))
                return -ENODEV;
+
+       if (!mcpm_is_available())
+               return -EUNATCH;
+
        /*
         * For now the differentiation between little and big cores
         * is based on the part number. A7 cores are considered little
index 3891f6781298c39aee61e68010bf6f42396def1f..64281bb2f6503c5a63d605ca7249ca86434d5aaf 100644 (file)
@@ -88,4 +88,16 @@ config ARM_EXYNOS5_BUS_DEVFREQ
          It reads PPMU counters of memory controllers and adjusts the
          operating frequencies and voltages with OPP support.
 
+config ARM_TEGRA_DEVFREQ
+       tristate "Tegra DEVFREQ Driver"
+       depends on ARCH_TEGRA_124_SOC
+       select DEVFREQ_GOV_SIMPLE_ONDEMAND
+       select PM_OPP
+       help
+         This adds the DEVFREQ driver for the Tegra family of SoCs.
+         It reads ACTMON counters of memory controllers and adjusts the
+         operating frequencies and voltages with OPP support.
+
+source "drivers/devfreq/event/Kconfig"
+
 endif # PM_DEVFREQ
index 16138c9e0d587595f24d9d61535714e1b788d5cd..5134f9ee983d2308105f50065e4165aa15e01059 100644 (file)
@@ -1,4 +1,5 @@
 obj-$(CONFIG_PM_DEVFREQ)       += devfreq.o
+obj-$(CONFIG_PM_DEVFREQ_EVENT) += devfreq-event.o
 obj-$(CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND)      += governor_simpleondemand.o
 obj-$(CONFIG_DEVFREQ_GOV_PERFORMANCE)  += governor_performance.o
 obj-$(CONFIG_DEVFREQ_GOV_POWERSAVE)    += governor_powersave.o
@@ -7,3 +8,7 @@ obj-$(CONFIG_DEVFREQ_GOV_USERSPACE)     += governor_userspace.o
 # DEVFREQ Drivers
 obj-$(CONFIG_ARM_EXYNOS4_BUS_DEVFREQ)  += exynos/
 obj-$(CONFIG_ARM_EXYNOS5_BUS_DEVFREQ)  += exynos/
+obj-$(CONFIG_ARM_TEGRA_DEVFREQ)                += tegra-devfreq.o
+
+# DEVFREQ Event Drivers
+obj-$(CONFIG_PM_DEVFREQ_EVENT)         += event/
diff --git a/drivers/devfreq/devfreq-event.c b/drivers/devfreq/devfreq-event.c
new file mode 100644 (file)
index 0000000..f304a02
--- /dev/null
@@ -0,0 +1,494 @@
+/*
+ * devfreq-event: a framework to provide raw data and events of devfreq devices
+ *
+ * Copyright (C) 2015 Samsung Electronics
+ * Author: Chanwoo Choi <cw00.choi@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This driver is based on drivers/devfreq/devfreq.c.
+ */
+
+#include <linux/devfreq-event.h>
+#include <linux/kernel.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/list.h>
+#include <linux/of.h>
+
+static struct class *devfreq_event_class;
+
+/* The list of all registered devfreq-event devices */
+static LIST_HEAD(devfreq_event_list);
+static DEFINE_MUTEX(devfreq_event_list_lock);
+
+#define to_devfreq_event(DEV) container_of(DEV, struct devfreq_event_dev, dev)
+
+/**
+ * devfreq_event_enable_edev() - Enable the devfreq-event dev and increase
+ *                              the enable_count of devfreq-event dev.
+ * @edev       : the devfreq-event device
+ *
+ * Note that this function increases the enable_count and enables the
+ * devfreq-event device. A devfreq-event device must be enabled before a
+ * devfreq device can use it.
+ */
+int devfreq_event_enable_edev(struct devfreq_event_dev *edev)
+{
+       int ret = 0;
+
+       if (!edev || !edev->desc)
+               return -EINVAL;
+
+       mutex_lock(&edev->lock);
+       if (edev->desc->ops && edev->desc->ops->enable
+                       && edev->enable_count == 0) {
+               ret = edev->desc->ops->enable(edev);
+               if (ret < 0)
+                       goto err;
+       }
+       edev->enable_count++;
+err:
+       mutex_unlock(&edev->lock);
+
+       return ret;
+}
+EXPORT_SYMBOL_GPL(devfreq_event_enable_edev);
+
+/**
+ * devfreq_event_disable_edev() - Disable the devfreq-event dev and decrease
+ *                               the enable_count of the devfreq-event dev.
+ * @edev       : the devfreq-event device
+ *
+ * Note that this function decreases the enable_count and disables the
+ * devfreq-event device. After the devfreq-event device is disabled, a
+ * devfreq device can no longer use it for get/set/reset operations.
+ */
+int devfreq_event_disable_edev(struct devfreq_event_dev *edev)
+{
+       int ret = 0;
+
+       if (!edev || !edev->desc)
+               return -EINVAL;
+
+       mutex_lock(&edev->lock);
+       if (edev->enable_count <= 0) {
+               dev_warn(&edev->dev, "unbalanced enable_count\n");
+               ret = -EIO;
+               goto err;
+       }
+
+       if (edev->desc->ops && edev->desc->ops->disable
+                       && edev->enable_count == 1) {
+               ret = edev->desc->ops->disable(edev);
+               if (ret < 0)
+                       goto err;
+       }
+       edev->enable_count--;
+err:
+       mutex_unlock(&edev->lock);
+
+       return ret;
+}
+EXPORT_SYMBOL_GPL(devfreq_event_disable_edev);
+
+/**
+ * devfreq_event_is_enabled() - Check whether devfreq-event dev is enabled or
+ *                             not.
+ * @edev       : the devfreq-event device
+ *
+ * Note that this function checks whether the devfreq-event dev is enabled.
+ * It returns true if the devfreq-event dev is enabled and false if it is
+ * disabled.
+ */
+bool devfreq_event_is_enabled(struct devfreq_event_dev *edev)
+{
+       bool enabled = false;
+
+       if (!edev || !edev->desc)
+               return enabled;
+
+       mutex_lock(&edev->lock);
+
+       if (edev->enable_count > 0)
+               enabled = true;
+
+       mutex_unlock(&edev->lock);
+
+       return enabled;
+}
+EXPORT_SYMBOL_GPL(devfreq_event_is_enabled);
+
+/**
+ * devfreq_event_set_event() - Set event to devfreq-event dev to start.
+ * @edev       : the devfreq-event device
+ *
+ * Note that this function sets the event on the devfreq-event device and
+ * starts it, so that event data of the configured event type can be gathered.
+ */
+int devfreq_event_set_event(struct devfreq_event_dev *edev)
+{
+       int ret;
+
+       if (!edev || !edev->desc)
+               return -EINVAL;
+
+       if (!edev->desc->ops || !edev->desc->ops->set_event)
+               return -EINVAL;
+
+       if (!devfreq_event_is_enabled(edev))
+               return -EPERM;
+
+       mutex_lock(&edev->lock);
+       ret = edev->desc->ops->set_event(edev);
+       mutex_unlock(&edev->lock);
+
+       return ret;
+}
+EXPORT_SYMBOL_GPL(devfreq_event_set_event);
+
+/**
+ * devfreq_event_get_event() - Get {load|total}_count from devfreq-event dev.
+ * @edev       : the devfreq-event device
+ * @edata      : the calculated data of devfreq-event device
+ *
+ * Note that this function gets the calculated event data from the
+ * devfreq-event dev after stopping its whole counting sequence.
+ */
+int devfreq_event_get_event(struct devfreq_event_dev *edev,
+                           struct devfreq_event_data *edata)
+{
+       int ret;
+
+       if (!edev || !edev->desc)
+               return -EINVAL;
+
+       if (!edev->desc->ops || !edev->desc->ops->get_event)
+               return -EINVAL;
+
+       if (!devfreq_event_is_enabled(edev))
+               return -EINVAL;
+
+       edata->total_count = edata->load_count = 0;
+
+       mutex_lock(&edev->lock);
+       ret = edev->desc->ops->get_event(edev, edata);
+       if (ret < 0)
+               edata->total_count = edata->load_count = 0;
+       mutex_unlock(&edev->lock);
+
+       return ret;
+}
+EXPORT_SYMBOL_GPL(devfreq_event_get_event);
+
+/**
+ * devfreq_event_reset_event() - Reset all operations of devfreq-event dev.
+ * @edev       : the devfreq-event device
+ *
+ * Note that this function stops all operations of the devfreq-event dev and
+ * resets the current event data, returning the device to its initial state.
+ */
+int devfreq_event_reset_event(struct devfreq_event_dev *edev)
+{
+       int ret = 0;
+
+       if (!edev || !edev->desc)
+               return -EINVAL;
+
+       if (!devfreq_event_is_enabled(edev))
+               return -EPERM;
+
+       mutex_lock(&edev->lock);
+       if (edev->desc->ops && edev->desc->ops->reset)
+               ret = edev->desc->ops->reset(edev);
+       mutex_unlock(&edev->lock);
+
+       return ret;
+}
+EXPORT_SYMBOL_GPL(devfreq_event_reset_event);
+
+/**
+ * devfreq_event_get_edev_by_phandle() - Get the devfreq-event dev from
+ *                                      devicetree.
+ * @dev                : the pointer to the given device
+ * @index      : the index into list of devfreq-event device
+ *
+ * Note that this function returns a pointer to the matching devfreq-event
+ * device.
+ */
+struct devfreq_event_dev *devfreq_event_get_edev_by_phandle(struct device *dev,
+                                                     int index)
+{
+       struct device_node *node;
+       struct devfreq_event_dev *edev;
+
+       if (!dev->of_node) {
+               dev_err(dev, "device does not have a device node entry\n");
+               return ERR_PTR(-EINVAL);
+       }
+
+       node = of_parse_phandle(dev->of_node, "devfreq-events", index);
+       if (!node) {
+               dev_err(dev, "failed to get phandle in %s node\n",
+                       dev->of_node->full_name);
+               return ERR_PTR(-ENODEV);
+       }
+
+       mutex_lock(&devfreq_event_list_lock);
+       list_for_each_entry(edev, &devfreq_event_list, node) {
+               if (!strcmp(edev->desc->name, node->name))
+                       goto out;
+       }
+       edev = NULL;
+out:
+       mutex_unlock(&devfreq_event_list_lock);
+
+       if (!edev) {
+               dev_err(dev, "unable to get devfreq-event device : %s\n",
+                       node->name);
+               of_node_put(node);
+               return ERR_PTR(-ENODEV);
+       }
+
+       of_node_put(node);
+
+       return edev;
+}
+EXPORT_SYMBOL_GPL(devfreq_event_get_edev_by_phandle);
+
+/**
+ * devfreq_event_get_edev_count() - Get the count of devfreq-event dev
+ * @dev                : the pointer to the given device
+ *
+ * Note that this function returns the number of devfreq-event devices.
+ */
+int devfreq_event_get_edev_count(struct device *dev)
+{
+       int count;
+
+       if (!dev->of_node) {
+               dev_err(dev, "device does not have a device node entry\n");
+               return -EINVAL;
+       }
+
+       count = of_property_count_elems_of_size(dev->of_node, "devfreq-events",
+                                               sizeof(u32));
+       if (count < 0) {
+               dev_err(dev,
+                       "failed to get the count of devfreq-event in %s node\n",
+                       dev->of_node->full_name);
+               return count;
+       }
+
+       return count;
+}
+EXPORT_SYMBOL_GPL(devfreq_event_get_edev_count);
+
+static void devfreq_event_release_edev(struct device *dev)
+{
+       struct devfreq_event_dev *edev = to_devfreq_event(dev);
+
+       kfree(edev);
+}
+
+/**
+ * devfreq_event_add_edev() - Add new devfreq-event device.
+ * @dev                : the device owning the devfreq-event device being created
+ * @desc       : the devfreq-event device's descriptor, which includes
+ *               essential data for the devfreq-event device.
+ *
+ * Note that this function adds a new devfreq-event device to the devfreq-event
+ * class list and registers the device of the devfreq-event device.
+ */
+struct devfreq_event_dev *devfreq_event_add_edev(struct device *dev,
+                                               struct devfreq_event_desc *desc)
+{
+       struct devfreq_event_dev *edev;
+       static atomic_t event_no = ATOMIC_INIT(0);
+       int ret;
+
+       if (!dev || !desc)
+               return ERR_PTR(-EINVAL);
+
+       if (!desc->name || !desc->ops)
+               return ERR_PTR(-EINVAL);
+
+       if (!desc->ops->set_event || !desc->ops->get_event)
+               return ERR_PTR(-EINVAL);
+
+       edev = kzalloc(sizeof(struct devfreq_event_dev), GFP_KERNEL);
+       if (!edev)
+               return ERR_PTR(-ENOMEM);
+
+       mutex_init(&edev->lock);
+       edev->desc = desc;
+       edev->enable_count = 0;
+       edev->dev.parent = dev;
+       edev->dev.class = devfreq_event_class;
+       edev->dev.release = devfreq_event_release_edev;
+
+       dev_set_name(&edev->dev, "event.%d", atomic_inc_return(&event_no) - 1);
+       ret = device_register(&edev->dev);
+       if (ret < 0) {
+               put_device(&edev->dev);
+               return ERR_PTR(ret);
+       }
+       dev_set_drvdata(&edev->dev, edev);
+
+       INIT_LIST_HEAD(&edev->node);
+
+       mutex_lock(&devfreq_event_list_lock);
+       list_add(&edev->node, &devfreq_event_list);
+       mutex_unlock(&devfreq_event_list_lock);
+
+       return edev;
+}
+EXPORT_SYMBOL_GPL(devfreq_event_add_edev);
+
+/**
+ * devfreq_event_remove_edev() - Remove the devfreq-event device registered.
+ * @dev                : the devfreq-event device
+ *
+ * Note that this function removes the registered devfreq-event device.
+ */
+int devfreq_event_remove_edev(struct devfreq_event_dev *edev)
+{
+       if (!edev)
+               return -EINVAL;
+
+       WARN_ON(edev->enable_count);
+
+       mutex_lock(&devfreq_event_list_lock);
+       list_del(&edev->node);
+       mutex_unlock(&devfreq_event_list_lock);
+
+       device_unregister(&edev->dev);
+
+       return 0;
+}
+EXPORT_SYMBOL_GPL(devfreq_event_remove_edev);
+
+static int devm_devfreq_event_match(struct device *dev, void *res, void *data)
+{
+       struct devfreq_event_dev **r = res;
+
+       if (WARN_ON(!r || !*r))
+               return 0;
+
+       return *r == data;
+}
+
+static void devm_devfreq_event_release(struct device *dev, void *res)
+{
+       devfreq_event_remove_edev(*(struct devfreq_event_dev **)res);
+}
+
+/**
+ * devm_devfreq_event_add_edev() - Resource-managed devfreq_event_add_edev()
+ * @dev                : the device owning the devfreq-event device being created
+ * @desc       : the devfreq-event device's descriptor, which includes
+ *               essential data for the devfreq-event device.
+ *
+ * Note that this function automatically manages the memory of the
+ * devfreq-event device using device resource management, which simplifies
+ * freeing the devfreq-event device's memory.
+ */
+struct devfreq_event_dev *devm_devfreq_event_add_edev(struct device *dev,
+                                               struct devfreq_event_desc *desc)
+{
+       struct devfreq_event_dev **ptr, *edev;
+
+       ptr = devres_alloc(devm_devfreq_event_release, sizeof(*ptr), GFP_KERNEL);
+       if (!ptr)
+               return ERR_PTR(-ENOMEM);
+
+       edev = devfreq_event_add_edev(dev, desc);
+       if (IS_ERR(edev)) {
+               devres_free(ptr);
+               return ERR_PTR(-ENOMEM);
+       }
+
+       *ptr = edev;
+       devres_add(dev, ptr);
+
+       return edev;
+}
+EXPORT_SYMBOL_GPL(devm_devfreq_event_add_edev);
+
+/**
+ * devm_devfreq_event_remove_edev()- Resource-managed devfreq_event_remove_edev()
+ * @dev                : the device owning the devfreq-event device being created
+ * @edev       : the devfreq-event device
+ *
+ * Note that this function automatically frees the memory of the
+ * devfreq-event device using device resource management.
+ */
+void devm_devfreq_event_remove_edev(struct device *dev,
+                               struct devfreq_event_dev *edev)
+{
+       WARN_ON(devres_release(dev, devm_devfreq_event_release,
+                              devm_devfreq_event_match, edev));
+}
+EXPORT_SYMBOL_GPL(devm_devfreq_event_remove_edev);
+
+/*
+ * Device attributes for devfreq-event class.
+ */
+static ssize_t name_show(struct device *dev, struct device_attribute *attr,
+                        char *buf)
+{
+       struct devfreq_event_dev *edev = to_devfreq_event(dev);
+
+       if (!edev || !edev->desc)
+               return -EINVAL;
+
+       return sprintf(buf, "%s\n", edev->desc->name);
+}
+static DEVICE_ATTR_RO(name);
+
+static ssize_t enable_count_show(struct device *dev,
+                                 struct device_attribute *attr, char *buf)
+{
+       struct devfreq_event_dev *edev = to_devfreq_event(dev);
+
+       if (!edev || !edev->desc)
+               return -EINVAL;
+
+       return sprintf(buf, "%d\n", edev->enable_count);
+}
+static DEVICE_ATTR_RO(enable_count);
+
+static struct attribute *devfreq_event_attrs[] = {
+       &dev_attr_name.attr,
+       &dev_attr_enable_count.attr,
+       NULL,
+};
+ATTRIBUTE_GROUPS(devfreq_event);
+
+static int __init devfreq_event_init(void)
+{
+       devfreq_event_class = class_create(THIS_MODULE, "devfreq-event");
+       if (IS_ERR(devfreq_event_class)) {
+               pr_err("%s: couldn't create class\n", __FILE__);
+               return PTR_ERR(devfreq_event_class);
+       }
+
+       devfreq_event_class->dev_groups = devfreq_event_groups;
+
+       return 0;
+}
+subsys_initcall(devfreq_event_init);
+
+static void __exit devfreq_event_exit(void)
+{
+       class_destroy(devfreq_event_class);
+}
+module_exit(devfreq_event_exit);
+
+MODULE_AUTHOR("Chanwoo Choi <cw00.choi@samsung.com>");
+MODULE_DESCRIPTION("DEVFREQ-Event class support");
+MODULE_LICENSE("GPL");
diff --git a/drivers/devfreq/event/Kconfig b/drivers/devfreq/event/Kconfig
new file mode 100644 (file)
index 0000000..a11720a
--- /dev/null
@@ -0,0 +1,25 @@
+menuconfig PM_DEVFREQ_EVENT
+       bool "DEVFREQ-Event device Support"
+       help
+         The devfreq-event device provides raw data and events that
+         indicate the current state of the devfreq-event device. The
+         provided data is used to monitor the state of the device and
+         to determine the suitable amount of resources, reducing waste.
+
+         The devfreq-event device can support various types of events
+         (e.g. raw data, utilization, latency, bandwidth). The events
+         may be used by devfreq governors and other subsystems.
+
+if PM_DEVFREQ_EVENT
+
+config DEVFREQ_EVENT_EXYNOS_PPMU
+       bool "EXYNOS PPMU (Platform Performance Monitoring Unit) DEVFREQ event Driver"
+       depends on ARCH_EXYNOS
+       select PM_OPP
+       help
+         This adds the devfreq-event driver for Exynos SoCs. It provides PPMU
+         (Platform Performance Monitoring Unit) counters to estimate the
+         utilization of each module.
+
+endif # PM_DEVFREQ_EVENT
diff --git a/drivers/devfreq/event/Makefile b/drivers/devfreq/event/Makefile
new file mode 100644 (file)
index 0000000..be146ea
--- /dev/null
@@ -0,0 +1,2 @@
+# Exynos DEVFREQ Event Drivers
+obj-$(CONFIG_DEVFREQ_EVENT_EXYNOS_PPMU) += exynos-ppmu.o
diff --git a/drivers/devfreq/event/exynos-ppmu.c b/drivers/devfreq/event/exynos-ppmu.c
new file mode 100644 (file)
index 0000000..135be0a
--- /dev/null
@@ -0,0 +1,374 @@
+/*
+ * exynos_ppmu.c - EXYNOS PPMU (Platform Performance Monitoring Unit) support
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author : Chanwoo Choi <cw00.choi@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This driver is based on drivers/devfreq/exynos/exynos_ppmu.c
+ */
+
+#include <linux/clk.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <linux/suspend.h>
+#include <linux/devfreq-event.h>
+
+#include "exynos-ppmu.h"
+
+struct exynos_ppmu_data {
+       void __iomem *base;
+       struct clk *clk;
+};
+
+struct exynos_ppmu {
+       struct devfreq_event_dev **edev;
+       struct devfreq_event_desc *desc;
+       unsigned int num_events;
+
+       struct device *dev;
+       struct mutex lock;
+
+       struct exynos_ppmu_data ppmu;
+};
+
+#define PPMU_EVENT(name)                       \
+       { "ppmu-event0-"#name, PPMU_PMNCNT0 },  \
+       { "ppmu-event1-"#name, PPMU_PMNCNT1 },  \
+       { "ppmu-event2-"#name, PPMU_PMNCNT2 },  \
+       { "ppmu-event3-"#name, PPMU_PMNCNT3 }
+
+struct __exynos_ppmu_events {
+       char *name;
+       int id;
+} ppmu_events[] = {
+       /* For Exynos3250, Exynos4 and Exynos5260 */
+       PPMU_EVENT(g3d),
+       PPMU_EVENT(fsys),
+
+       /* For Exynos4 SoCs and Exynos3250 */
+       PPMU_EVENT(dmc0),
+       PPMU_EVENT(dmc1),
+       PPMU_EVENT(cpu),
+       PPMU_EVENT(rightbus),
+       PPMU_EVENT(leftbus),
+       PPMU_EVENT(lcd0),
+       PPMU_EVENT(camif),
+
+       /* Only for Exynos3250 and Exynos5260 */
+       PPMU_EVENT(mfc),
+
+       /* Only for Exynos4 SoCs */
+       PPMU_EVENT(mfc-left),
+       PPMU_EVENT(mfc-right),
+
+       /* Only for Exynos5260 SoCs */
+       PPMU_EVENT(drex0-s0),
+       PPMU_EVENT(drex0-s1),
+       PPMU_EVENT(drex1-s0),
+       PPMU_EVENT(drex1-s1),
+       PPMU_EVENT(eagle),
+       PPMU_EVENT(kfc),
+       PPMU_EVENT(isp),
+       PPMU_EVENT(fimc),
+       PPMU_EVENT(gscl),
+       PPMU_EVENT(mscl),
+       PPMU_EVENT(fimd0x),
+       PPMU_EVENT(fimd1x),
+       { /* sentinel */ },
+};
+
+static int exynos_ppmu_find_ppmu_id(struct devfreq_event_dev *edev)
+{
+       int i;
+
+       for (i = 0; i < ARRAY_SIZE(ppmu_events); i++)
+               if (!strcmp(edev->desc->name, ppmu_events[i].name))
+                       return ppmu_events[i].id;
+
+       return -EINVAL;
+}
+
+static int exynos_ppmu_disable(struct devfreq_event_dev *edev)
+{
+       struct exynos_ppmu *info = devfreq_event_get_drvdata(edev);
+       u32 pmnc;
+
+       /* Disable all counters */
+       __raw_writel(PPMU_CCNT_MASK |
+                    PPMU_PMCNT0_MASK |
+                    PPMU_PMCNT1_MASK |
+                    PPMU_PMCNT2_MASK |
+                    PPMU_PMCNT3_MASK,
+                    info->ppmu.base + PPMU_CNTENC);
+
+       /* Disable PPMU */
+       pmnc = __raw_readl(info->ppmu.base + PPMU_PMNC);
+       pmnc &= ~PPMU_PMNC_ENABLE_MASK;
+       __raw_writel(pmnc, info->ppmu.base + PPMU_PMNC);
+
+       return 0;
+}
+
+static int exynos_ppmu_set_event(struct devfreq_event_dev *edev)
+{
+       struct exynos_ppmu *info = devfreq_event_get_drvdata(edev);
+       int id = exynos_ppmu_find_ppmu_id(edev);
+       u32 pmnc, cntens;
+
+       if (id < 0)
+               return id;
+
+       /* Enable specific counter */
+       cntens = __raw_readl(info->ppmu.base + PPMU_CNTENS);
+       cntens |= (PPMU_CCNT_MASK | (PPMU_ENABLE << id));
+       __raw_writel(cntens, info->ppmu.base + PPMU_CNTENS);
+
+       /* Select the read/write data count event */
+       __raw_writel(PPMU_RO_DATA_CNT | PPMU_WO_DATA_CNT,
+                       info->ppmu.base + PPMU_BEVTxSEL(id));
+
+       /* Reset cycle counter/performance counter and enable PPMU */
+       pmnc = __raw_readl(info->ppmu.base + PPMU_PMNC);
+       pmnc &= ~(PPMU_PMNC_ENABLE_MASK
+                       | PPMU_PMNC_COUNTER_RESET_MASK
+                       | PPMU_PMNC_CC_RESET_MASK);
+       pmnc |= (PPMU_ENABLE << PPMU_PMNC_ENABLE_SHIFT);
+       pmnc |= (PPMU_ENABLE << PPMU_PMNC_COUNTER_RESET_SHIFT);
+       pmnc |= (PPMU_ENABLE << PPMU_PMNC_CC_RESET_SHIFT);
+       __raw_writel(pmnc, info->ppmu.base + PPMU_PMNC);
+
+       return 0;
+}
+
+static int exynos_ppmu_get_event(struct devfreq_event_dev *edev,
+                               struct devfreq_event_data *edata)
+{
+       struct exynos_ppmu *info = devfreq_event_get_drvdata(edev);
+       int id = exynos_ppmu_find_ppmu_id(edev);
+       u32 pmnc, cntenc;
+
+       if (id < 0)
+               return -EINVAL;
+
+       /* Disable PPMU */
+       pmnc = __raw_readl(info->ppmu.base + PPMU_PMNC);
+       pmnc &= ~PPMU_PMNC_ENABLE_MASK;
+       __raw_writel(pmnc, info->ppmu.base + PPMU_PMNC);
+
+       /* Read cycle count */
+       edata->total_count = __raw_readl(info->ppmu.base + PPMU_CCNT);
+
+       /* Read performance count */
+       switch (id) {
+       case PPMU_PMNCNT0:
+       case PPMU_PMNCNT1:
+       case PPMU_PMNCNT2:
+               edata->load_count
+                       = __raw_readl(info->ppmu.base + PPMU_PMNCT(id));
+               break;
+       case PPMU_PMNCNT3:
+               edata->load_count =
+                       ((__raw_readl(info->ppmu.base + PPMU_PMCNT3_HIGH) << 8)
+                       | __raw_readl(info->ppmu.base + PPMU_PMCNT3_LOW));
+               break;
+       default:
+               return -EINVAL;
+       }
+
+       /* Disable specific counter */
+       cntenc = __raw_readl(info->ppmu.base + PPMU_CNTENC);
+       cntenc |= (PPMU_CCNT_MASK | (PPMU_ENABLE << id));
+       __raw_writel(cntenc, info->ppmu.base + PPMU_CNTENC);
+
+       dev_dbg(&edev->dev, "%s (event: %ld/%ld)\n", edev->desc->name,
+                                       edata->load_count, edata->total_count);
+
+       return 0;
+}
+
+static struct devfreq_event_ops exynos_ppmu_ops = {
+       .disable = exynos_ppmu_disable,
+       .set_event = exynos_ppmu_set_event,
+       .get_event = exynos_ppmu_get_event,
+};
+
+static int of_get_devfreq_events(struct device_node *np,
+                                struct exynos_ppmu *info)
+{
+       struct devfreq_event_desc *desc;
+       struct device *dev = info->dev;
+       struct device_node *events_np, *node;
+       int i, j, count;
+
+       events_np = of_get_child_by_name(np, "events");
+       if (!events_np) {
+               dev_err(dev,
+                       "failed to get child node of devfreq-event devices\n");
+               return -EINVAL;
+       }
+
+       count = of_get_child_count(events_np);
+       desc = devm_kzalloc(dev, sizeof(*desc) * count, GFP_KERNEL);
+       if (!desc)
+               return -ENOMEM;
+       info->num_events = count;
+
+       j = 0;
+       for_each_child_of_node(events_np, node) {
+               for (i = 0; i < ARRAY_SIZE(ppmu_events); i++) {
+                       if (!ppmu_events[i].name)
+                               continue;
+
+                       if (!of_node_cmp(node->name, ppmu_events[i].name))
+                               break;
+               }
+
+               if (i == ARRAY_SIZE(ppmu_events)) {
+                       dev_warn(dev,
+                               "don't know how to configure event: %s\n",
+                               node->name);
+                       continue;
+               }
+
+               desc[j].ops = &exynos_ppmu_ops;
+               desc[j].driver_data = info;
+
+               of_property_read_string(node, "event-name", &desc[j].name);
+
+               j++;
+       }
+       info->desc = desc;
+
+       of_node_put(events_np);
+
+       return 0;
+}
+
+static int exynos_ppmu_parse_dt(struct exynos_ppmu *info)
+{
+       struct device *dev = info->dev;
+       struct device_node *np = dev->of_node;
+       int ret = 0;
+
+       if (!np) {
+               dev_err(dev, "failed to find devicetree node\n");
+               return -EINVAL;
+       }
+
+       /* Maps the memory mapped IO to control PPMU register */
+       info->ppmu.base = of_iomap(np, 0);
+       if (IS_ERR_OR_NULL(info->ppmu.base)) {
+               dev_err(dev, "failed to map memory region\n");
+               return -ENOMEM;
+       }
+
+       info->ppmu.clk = devm_clk_get(dev, "ppmu");
+       if (IS_ERR(info->ppmu.clk)) {
+               info->ppmu.clk = NULL;
+               dev_warn(dev, "cannot get PPMU clock\n");
+       }
+
+       ret = of_get_devfreq_events(np, info);
+       if (ret < 0) {
+               dev_err(dev, "failed to parse exynos ppmu dt node\n");
+               goto err;
+       }
+
+       return 0;
+
+err:
+       iounmap(info->ppmu.base);
+
+       return ret;
+}
+
+static int exynos_ppmu_probe(struct platform_device *pdev)
+{
+       struct exynos_ppmu *info;
+       struct devfreq_event_dev **edev;
+       struct devfreq_event_desc *desc;
+       int i, ret = 0, size;
+
+       info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL);
+       if (!info)
+               return -ENOMEM;
+
+       mutex_init(&info->lock);
+       info->dev = &pdev->dev;
+
+       /* Parse dt data to get resource */
+       ret = exynos_ppmu_parse_dt(info);
+       if (ret < 0) {
+               dev_err(&pdev->dev,
+                       "failed to parse devicetree for resource\n");
+               return ret;
+       }
+       desc = info->desc;
+
+       size = sizeof(struct devfreq_event_dev *) * info->num_events;
+       info->edev = devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
+       if (!info->edev) {
+               dev_err(&pdev->dev,
+                       "failed to allocate memory for devfreq-event devices\n");
+               return -ENOMEM;
+       }
+       edev = info->edev;
+       platform_set_drvdata(pdev, info);
+
+       for (i = 0; i < info->num_events; i++) {
+               edev[i] = devm_devfreq_event_add_edev(&pdev->dev, &desc[i]);
+               if (IS_ERR(edev[i])) {
+                       ret = PTR_ERR(edev[i]);
+                       dev_err(&pdev->dev,
+                               "failed to add devfreq-event device\n");
+                       goto err;
+               }
+       }
+
+       clk_prepare_enable(info->ppmu.clk);
+
+       return 0;
+err:
+       iounmap(info->ppmu.base);
+
+       return ret;
+}
+
+static int exynos_ppmu_remove(struct platform_device *pdev)
+{
+       struct exynos_ppmu *info = platform_get_drvdata(pdev);
+
+       clk_disable_unprepare(info->ppmu.clk);
+       iounmap(info->ppmu.base);
+
+       return 0;
+}
+
+static const struct of_device_id exynos_ppmu_id_match[] = {
+       { .compatible = "samsung,exynos-ppmu", },
+       { /* sentinel */ },
+};
+
+static struct platform_driver exynos_ppmu_driver = {
+       .probe  = exynos_ppmu_probe,
+       .remove = exynos_ppmu_remove,
+       .driver = {
+               .name   = "exynos-ppmu",
+               .of_match_table = exynos_ppmu_id_match,
+       },
+};
+module_platform_driver(exynos_ppmu_driver);
+
+MODULE_DESCRIPTION("Exynos PPMU(Platform Performance Monitoring Unit) driver");
+MODULE_AUTHOR("Chanwoo Choi <cw00.choi@samsung.com>");
+MODULE_LICENSE("GPL");
diff --git a/drivers/devfreq/event/exynos-ppmu.h b/drivers/devfreq/event/exynos-ppmu.h
new file mode 100644 (file)
index 0000000..4e831d4
--- /dev/null
@@ -0,0 +1,93 @@
+/*
+ * exynos_ppmu.h - EXYNOS PPMU header file
+ *
+ * Copyright (c) 2015 Samsung Electronics Co., Ltd.
+ * Author : Chanwoo Choi <cw00.choi@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __EXYNOS_PPMU_H__
+#define __EXYNOS_PPMU_H__
+
+enum ppmu_state {
+       PPMU_DISABLE = 0,
+       PPMU_ENABLE,
+};
+
+enum ppmu_counter {
+       PPMU_PMNCNT0 = 0,
+       PPMU_PMNCNT1,
+       PPMU_PMNCNT2,
+       PPMU_PMNCNT3,
+
+       PPMU_PMNCNT_MAX,
+};
+
+enum ppmu_event_type {
+       PPMU_RO_BUSY_CYCLE_CNT  = 0x0,
+       PPMU_WO_BUSY_CYCLE_CNT  = 0x1,
+       PPMU_RW_BUSY_CYCLE_CNT  = 0x2,
+       PPMU_RO_REQUEST_CNT     = 0x3,
+       PPMU_WO_REQUEST_CNT     = 0x4,
+       PPMU_RO_DATA_CNT        = 0x5,
+       PPMU_WO_DATA_CNT        = 0x6,
+       PPMU_RO_LATENCY         = 0x12,
+       PPMU_WO_LATENCY         = 0x16,
+};
+
+enum ppmu_reg {
+       /* PPC control register */
+       PPMU_PMNC               = 0x00,
+       PPMU_CNTENS             = 0x10,
+       PPMU_CNTENC             = 0x20,
+       PPMU_INTENS             = 0x30,
+       PPMU_INTENC             = 0x40,
+       PPMU_FLAG               = 0x50,
+
+       /* Cycle Counter and Performance Event Counter Register */
+       PPMU_CCNT               = 0x100,
+       PPMU_PMCNT0             = 0x110,
+       PPMU_PMCNT1             = 0x120,
+       PPMU_PMCNT2             = 0x130,
+       PPMU_PMCNT3_HIGH        = 0x140,
+       PPMU_PMCNT3_LOW         = 0x150,
+
+       /* Bus Event Generator */
+       PPMU_BEVT0SEL           = 0x1000,
+       PPMU_BEVT1SEL           = 0x1100,
+       PPMU_BEVT2SEL           = 0x1200,
+       PPMU_BEVT3SEL           = 0x1300,
+       PPMU_COUNTER_RESET      = 0x1810,
+       PPMU_READ_OVERFLOW_CNT  = 0x1810,
+       PPMU_READ_UNDERFLOW_CNT = 0x1814,
+       PPMU_WRITE_OVERFLOW_CNT = 0x1850,
+       PPMU_WRITE_UNDERFLOW_CNT = 0x1854,
+       PPMU_READ_PENDING_CNT   = 0x1880,
+       PPMU_WRITE_PENDING_CNT  = 0x1884
+};
+
+/* PMNC register */
+#define PPMU_PMNC_CC_RESET_SHIFT       2
+#define PPMU_PMNC_COUNTER_RESET_SHIFT  1
+#define PPMU_PMNC_ENABLE_SHIFT         0
+#define PPMU_PMNC_START_MODE_MASK      BIT(16)
+#define PPMU_PMNC_CC_DIVIDER_MASK      BIT(3)
+#define PPMU_PMNC_CC_RESET_MASK                BIT(2)
+#define PPMU_PMNC_COUNTER_RESET_MASK   BIT(1)
+#define PPMU_PMNC_ENABLE_MASK          BIT(0)
+
+/* CNTENS/CNTENC/INTENS/INTENC/FLAG register */
+#define PPMU_CCNT_MASK                 BIT(31)
+#define PPMU_PMCNT3_MASK               BIT(3)
+#define PPMU_PMCNT2_MASK               BIT(2)
+#define PPMU_PMCNT1_MASK               BIT(1)
+#define PPMU_PMCNT0_MASK               BIT(0)
+
+/* PPMU_PMNCTx/PPMU_BETxSEL registers */
+#define PPMU_PMNCT(x)                  (PPMU_PMCNT0 + (0x10 * x))
+#define PPMU_BEVTxSEL(x)               (PPMU_BEVT0SEL + (0x100 * x))
+
+#endif /* __EXYNOS_PPMU_H__ */
diff --git a/drivers/devfreq/tegra-devfreq.c b/drivers/devfreq/tegra-devfreq.c
new file mode 100644 (file)
index 0000000..3479096
--- /dev/null
@@ -0,0 +1,718 @@
+/*
+ * A devfreq driver for NVIDIA Tegra SoCs
+ *
+ * Copyright (c) 2014 NVIDIA CORPORATION. All rights reserved.
+ * Copyright (C) 2014 Google, Inc
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+#include <linux/clk.h>
+#include <linux/cpufreq.h>
+#include <linux/devfreq.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/pm_opp.h>
+#include <linux/reset.h>
+
+#include "governor.h"
+
+#define ACTMON_GLB_STATUS                                      0x0
+#define ACTMON_GLB_PERIOD_CTRL                                 0x4
+
+#define ACTMON_DEV_CTRL                                                0x0
+#define ACTMON_DEV_CTRL_K_VAL_SHIFT                            10
+#define ACTMON_DEV_CTRL_ENB_PERIODIC                           BIT(18)
+#define ACTMON_DEV_CTRL_AVG_BELOW_WMARK_EN                     BIT(20)
+#define ACTMON_DEV_CTRL_AVG_ABOVE_WMARK_EN                     BIT(21)
+#define ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_NUM_SHIFT      23
+#define ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_NUM_SHIFT      26
+#define ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_EN             BIT(29)
+#define ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_EN             BIT(30)
+#define ACTMON_DEV_CTRL_ENB                                    BIT(31)
+
+#define ACTMON_DEV_UPPER_WMARK                                 0x4
+#define ACTMON_DEV_LOWER_WMARK                                 0x8
+#define ACTMON_DEV_INIT_AVG                                    0xc
+#define ACTMON_DEV_AVG_UPPER_WMARK                             0x10
+#define ACTMON_DEV_AVG_LOWER_WMARK                             0x14
+#define ACTMON_DEV_COUNT_WEIGHT                                        0x18
+#define ACTMON_DEV_AVG_COUNT                                   0x20
+#define ACTMON_DEV_INTR_STATUS                                 0x24
+
+#define ACTMON_INTR_STATUS_CLEAR                               0xffffffff
+
+#define ACTMON_DEV_INTR_CONSECUTIVE_UPPER                      BIT(31)
+#define ACTMON_DEV_INTR_CONSECUTIVE_LOWER                      BIT(30)
+
+#define ACTMON_ABOVE_WMARK_WINDOW                              1
+#define ACTMON_BELOW_WMARK_WINDOW                              3
+#define ACTMON_BOOST_FREQ_STEP                                 16000
+
+/*
+ * The activity counter is incremented every 256 memory transactions, and
+ * each transaction takes 4 EMC clocks on Tegra124, so the COUNT_WEIGHT is
+ * 4 * 256 = 1024.
+ */
+#define ACTMON_COUNT_WEIGHT                                    0x400
+
+/*
+ * ACTMON_AVERAGE_WINDOW_LOG2: default value for @DEV_CTRL_K_VAL, which
+ * translates to 2 ^ (K_VAL + 1). ex: 2 ^ (6 + 1) = 128
+ */
+#define ACTMON_AVERAGE_WINDOW_LOG2                     6
+#define ACTMON_SAMPLING_PERIOD                         12 /* ms */
+#define ACTMON_DEFAULT_AVG_BAND                                6  /* in units of 0.1% */
+
+#define KHZ                                                    1000
+
+/* Assume that the bus is saturated if the utilization is 25% */
+#define BUS_SATURATION_RATIO                                   25
+
+/**
+ * struct tegra_devfreq_device_config - configuration specific to an ACTMON
+ * device
+ *
+ * Coefficients and thresholds are in %
+ */
+struct tegra_devfreq_device_config {
+       u32             offset;
+       u32             irq_mask;
+
+       unsigned int    boost_up_coeff;
+       unsigned int    boost_down_coeff;
+       unsigned int    boost_up_threshold;
+       unsigned int    boost_down_threshold;
+       u32             avg_dependency_threshold;
+};
+
+enum tegra_actmon_device {
+       MCALL = 0,
+       MCCPU,
+};
+
+static struct tegra_devfreq_device_config actmon_device_configs[] = {
+       {
+               /* MCALL */
+               .offset = 0x1c0,
+               .irq_mask = 1 << 26,
+               .boost_up_coeff = 200,
+               .boost_down_coeff = 50,
+               .boost_up_threshold = 60,
+               .boost_down_threshold = 40,
+       },
+       {
+               /* MCCPU */
+               .offset = 0x200,
+               .irq_mask = 1 << 25,
+               .boost_up_coeff = 800,
+               .boost_down_coeff = 90,
+               .boost_up_threshold = 27,
+               .boost_down_threshold = 10,
+               .avg_dependency_threshold = 50000,
+       },
+};
+
+/**
+ * struct tegra_devfreq_device - state specific to an ACTMON device
+ *
+ * Frequencies are in kHz.
+ */
+struct tegra_devfreq_device {
+       const struct tegra_devfreq_device_config *config;
+
+       void __iomem    *regs;
+       u32             avg_band_freq;
+       u32             avg_count;
+
+       unsigned long   target_freq;
+       unsigned long   boost_freq;
+};
+
+struct tegra_devfreq {
+       struct devfreq          *devfreq;
+
+       struct platform_device  *pdev;
+       struct reset_control    *reset;
+       struct clk              *clock;
+       void __iomem            *regs;
+
+       spinlock_t              lock;
+
+       struct clk              *emc_clock;
+       unsigned long           max_freq;
+       unsigned long           cur_freq;
+       struct notifier_block   rate_change_nb;
+
+       struct tegra_devfreq_device devices[ARRAY_SIZE(actmon_device_configs)];
+};
+
+struct tegra_actmon_emc_ratio {
+       unsigned long cpu_freq;
+       unsigned long emc_freq;
+};
+
+static struct tegra_actmon_emc_ratio actmon_emc_ratios[] = {
+       { 1400000, ULONG_MAX },
+       { 1200000,    750000 },
+       { 1100000,    600000 },
+       { 1000000,    500000 },
+       {  800000,    375000 },
+       {  500000,    200000 },
+       {  250000,    100000 },
+};
+
+static unsigned long do_percent(unsigned long val, unsigned int pct)
+{
+       return val * pct / 100;
+}
+
+static void tegra_devfreq_update_avg_wmark(struct tegra_devfreq_device *dev)
+{
+       u32 avg = dev->avg_count;
+       u32 band = dev->avg_band_freq * ACTMON_SAMPLING_PERIOD;
+
+       writel(avg + band, dev->regs + ACTMON_DEV_AVG_UPPER_WMARK);
+       avg = max(avg, band);
+       writel(avg - band, dev->regs + ACTMON_DEV_AVG_LOWER_WMARK);
+}
+
+static void tegra_devfreq_update_wmark(struct tegra_devfreq *tegra,
+                                      struct tegra_devfreq_device *dev)
+{
+       u32 val = tegra->cur_freq * ACTMON_SAMPLING_PERIOD;
+
+       writel(do_percent(val, dev->config->boost_up_threshold),
+              dev->regs + ACTMON_DEV_UPPER_WMARK);
+
+       writel(do_percent(val, dev->config->boost_down_threshold),
+              dev->regs + ACTMON_DEV_LOWER_WMARK);
+}
+
+static void actmon_write_barrier(struct tegra_devfreq *tegra)
+{
+       /* ensure the update has reached the ACTMON */
+       wmb();
+       readl(tegra->regs + ACTMON_GLB_STATUS);
+}
+
+static irqreturn_t actmon_isr(int irq, void *data)
+{
+       struct tegra_devfreq *tegra = data;
+       struct tegra_devfreq_device *dev = NULL;
+       unsigned long flags;
+       u32 val;
+       unsigned int i;
+
+       val = readl(tegra->regs + ACTMON_GLB_STATUS);
+
+       for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) {
+               if (val & tegra->devices[i].config->irq_mask) {
+                       dev = tegra->devices + i;
+                       break;
+               }
+       }
+
+       if (!dev)
+               return IRQ_NONE;
+
+       spin_lock_irqsave(&tegra->lock, flags);
+
+       dev->avg_count = readl(dev->regs + ACTMON_DEV_AVG_COUNT);
+       tegra_devfreq_update_avg_wmark(dev);
+
+       val = readl(dev->regs + ACTMON_DEV_INTR_STATUS);
+       if (val & ACTMON_DEV_INTR_CONSECUTIVE_UPPER) {
+               val = readl(dev->regs + ACTMON_DEV_CTRL) |
+                       ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_EN |
+                       ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_EN;
+
+               /*
+                * new_boost = min(old_boost * up_coef + step, max_freq)
+                */
+               dev->boost_freq = do_percent(dev->boost_freq,
+                                            dev->config->boost_up_coeff);
+               dev->boost_freq += ACTMON_BOOST_FREQ_STEP;
+               if (dev->boost_freq >= tegra->max_freq) {
+                       dev->boost_freq = tegra->max_freq;
+                       val &= ~ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_EN;
+               }
+               writel(val, dev->regs + ACTMON_DEV_CTRL);
+       } else if (val & ACTMON_DEV_INTR_CONSECUTIVE_LOWER) {
+               val = readl(dev->regs + ACTMON_DEV_CTRL) |
+                       ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_EN |
+                       ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_EN;
+
+               /*
+                * new_boost = old_boost * down_coef
+                * or 0 if (old_boost * down_coef < step / 2)
+                */
+               dev->boost_freq = do_percent(dev->boost_freq,
+                                            dev->config->boost_down_coeff);
+               if (dev->boost_freq < (ACTMON_BOOST_FREQ_STEP >> 1)) {
+                       dev->boost_freq = 0;
+                       val &= ~ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_EN;
+               }
+               writel(val, dev->regs + ACTMON_DEV_CTRL);
+       }
+
+       if (dev->config->avg_dependency_threshold) {
+               val = readl(dev->regs + ACTMON_DEV_CTRL);
+               if (dev->avg_count >= dev->config->avg_dependency_threshold)
+                       val |= ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_EN;
+               else if (dev->boost_freq == 0)
+                       val &= ~ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_EN;
+               writel(val, dev->regs + ACTMON_DEV_CTRL);
+       }
+
+       writel(ACTMON_INTR_STATUS_CLEAR, dev->regs + ACTMON_DEV_INTR_STATUS);
+
+       actmon_write_barrier(tegra);
+
+       spin_unlock_irqrestore(&tegra->lock, flags);
+
+       return IRQ_WAKE_THREAD;
+}
+
+static unsigned long actmon_cpu_to_emc_rate(struct tegra_devfreq *tegra,
+                                           unsigned long cpu_freq)
+{
+       unsigned int i;
+       struct tegra_actmon_emc_ratio *ratio = actmon_emc_ratios;
+
+       for (i = 0; i < ARRAY_SIZE(actmon_emc_ratios); i++, ratio++) {
+               if (cpu_freq >= ratio->cpu_freq) {
+                       if (ratio->emc_freq >= tegra->max_freq)
+                               return tegra->max_freq;
+                       else
+                               return ratio->emc_freq;
+               }
+       }
+
+       return 0;
+}
+
+static void actmon_update_target(struct tegra_devfreq *tegra,
+                                struct tegra_devfreq_device *dev)
+{
+       unsigned long cpu_freq = 0;
+       unsigned long static_cpu_emc_freq = 0;
+       unsigned int avg_sustain_coef;
+       unsigned long flags;
+
+       if (dev->config->avg_dependency_threshold) {
+               cpu_freq = cpufreq_get(0);
+               static_cpu_emc_freq = actmon_cpu_to_emc_rate(tegra, cpu_freq);
+       }
+
+       spin_lock_irqsave(&tegra->lock, flags);
+
+       dev->target_freq = dev->avg_count / ACTMON_SAMPLING_PERIOD;
+       avg_sustain_coef = 100 * 100 / dev->config->boost_up_threshold;
+       dev->target_freq = do_percent(dev->target_freq, avg_sustain_coef);
+       dev->target_freq += dev->boost_freq;
+
+       if (dev->avg_count >= dev->config->avg_dependency_threshold)
+               dev->target_freq = max(dev->target_freq, static_cpu_emc_freq);
+
+       spin_unlock_irqrestore(&tegra->lock, flags);
+}
+
+static irqreturn_t actmon_thread_isr(int irq, void *data)
+{
+       struct tegra_devfreq *tegra = data;
+
+       mutex_lock(&tegra->devfreq->lock);
+       update_devfreq(tegra->devfreq);
+       mutex_unlock(&tegra->devfreq->lock);
+
+       return IRQ_HANDLED;
+}
+
+static int tegra_actmon_rate_notify_cb(struct notifier_block *nb,
+                                      unsigned long action, void *ptr)
+{
+       struct clk_notifier_data *data = ptr;
+       struct tegra_devfreq *tegra = container_of(nb, struct tegra_devfreq,
+                                                  rate_change_nb);
+       unsigned int i;
+       unsigned long flags;
+
+       spin_lock_irqsave(&tegra->lock, flags);
+
+       switch (action) {
+       case POST_RATE_CHANGE:
+               tegra->cur_freq = data->new_rate / KHZ;
+
+               for (i = 0; i < ARRAY_SIZE(tegra->devices); i++)
+                       tegra_devfreq_update_wmark(tegra, tegra->devices + i);
+
+               actmon_write_barrier(tegra);
+               break;
+       case PRE_RATE_CHANGE:
+               /* fall through */
+       case ABORT_RATE_CHANGE:
+               break;
+       }
+
+       spin_unlock_irqrestore(&tegra->lock, flags);
+
+       return NOTIFY_OK;
+}
+
+static void tegra_actmon_configure_device(struct tegra_devfreq *tegra,
+                                         struct tegra_devfreq_device *dev)
+{
+       u32 val;
+
+       dev->avg_band_freq = tegra->max_freq * ACTMON_DEFAULT_AVG_BAND / KHZ;
+       dev->target_freq = tegra->cur_freq;
+
+       dev->avg_count = tegra->cur_freq * ACTMON_SAMPLING_PERIOD;
+       writel(dev->avg_count, dev->regs + ACTMON_DEV_INIT_AVG);
+
+       tegra_devfreq_update_avg_wmark(dev);
+       tegra_devfreq_update_wmark(tegra, dev);
+
+       writel(ACTMON_COUNT_WEIGHT, dev->regs + ACTMON_DEV_COUNT_WEIGHT);
+       writel(ACTMON_INTR_STATUS_CLEAR, dev->regs + ACTMON_DEV_INTR_STATUS);
+
+       val = 0;
+       val |= ACTMON_DEV_CTRL_ENB_PERIODIC |
+              ACTMON_DEV_CTRL_AVG_ABOVE_WMARK_EN |
+              ACTMON_DEV_CTRL_AVG_BELOW_WMARK_EN;
+       val |= (ACTMON_AVERAGE_WINDOW_LOG2 - 1)
+               << ACTMON_DEV_CTRL_K_VAL_SHIFT;
+       val |= (ACTMON_BELOW_WMARK_WINDOW - 1)
+               << ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_NUM_SHIFT;
+       val |= (ACTMON_ABOVE_WMARK_WINDOW - 1)
+               << ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_NUM_SHIFT;
+       val |= ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_EN |
+              ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_EN;
+
+       writel(val, dev->regs + ACTMON_DEV_CTRL);
+
+       actmon_write_barrier(tegra);
+
+       val = readl(dev->regs + ACTMON_DEV_CTRL);
+       val |= ACTMON_DEV_CTRL_ENB;
+       writel(val, dev->regs + ACTMON_DEV_CTRL);
+
+       actmon_write_barrier(tegra);
+}
+
+static int tegra_devfreq_suspend(struct device *dev)
+{
+       struct platform_device *pdev;
+       struct tegra_devfreq *tegra;
+       struct tegra_devfreq_device *actmon_dev;
+       unsigned int i;
+       u32 val;
+
+       pdev = container_of(dev, struct platform_device, dev);
+       tegra = platform_get_drvdata(pdev);
+
+       for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) {
+               actmon_dev = &tegra->devices[i];
+
+               val = readl(actmon_dev->regs + ACTMON_DEV_CTRL);
+               val &= ~ACTMON_DEV_CTRL_ENB;
+               writel(val, actmon_dev->regs + ACTMON_DEV_CTRL);
+
+               writel(ACTMON_INTR_STATUS_CLEAR,
+                      actmon_dev->regs + ACTMON_DEV_INTR_STATUS);
+
+               actmon_write_barrier(tegra);
+       }
+
+       return 0;
+}
+
+static int tegra_devfreq_resume(struct device *dev)
+{
+       struct platform_device *pdev;
+       struct tegra_devfreq *tegra;
+       struct tegra_devfreq_device *actmon_dev;
+       unsigned int i;
+
+       pdev = container_of(dev, struct platform_device, dev);
+       tegra = platform_get_drvdata(pdev);
+
+       for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) {
+               actmon_dev = &tegra->devices[i];
+
+               tegra_actmon_configure_device(tegra, actmon_dev);
+       }
+
+       return 0;
+}
+
+static int tegra_devfreq_target(struct device *dev, unsigned long *freq,
+                               u32 flags)
+{
+       struct platform_device *pdev;
+       struct tegra_devfreq *tegra;
+       struct dev_pm_opp *opp;
+       unsigned long rate = *freq * KHZ;
+
+       pdev = container_of(dev, struct platform_device, dev);
+       tegra = platform_get_drvdata(pdev);
+
+       rcu_read_lock();
+       opp = devfreq_recommended_opp(dev, &rate, flags);
+       if (IS_ERR(opp)) {
+               rcu_read_unlock();
+               dev_err(dev, "Failed to find opp for %lu KHz\n", *freq);
+               return PTR_ERR(opp);
+       }
+       rate = dev_pm_opp_get_freq(opp);
+       rcu_read_unlock();
+
+       /* TODO: Once we have per-user clk constraints, set a floor */
+       clk_set_rate(tegra->emc_clock, rate);
+
+       /* TODO: Set voltage as well */
+
+       return 0;
+}
+
+static int tegra_devfreq_get_dev_status(struct device *dev,
+                                       struct devfreq_dev_status *stat)
+{
+       struct platform_device *pdev;
+       struct tegra_devfreq *tegra;
+       struct tegra_devfreq_device *actmon_dev;
+
+       pdev = container_of(dev, struct platform_device, dev);
+       tegra = platform_get_drvdata(pdev);
+
+       stat->current_frequency = tegra->cur_freq;
+
+       /* To be used by the tegra governor */
+       stat->private_data = tegra;
+
+       /* The below are to be used by the other governors */
+
+       actmon_dev = &tegra->devices[MCALL];
+
+       /* Number of cycles spent on memory access */
+       stat->busy_time = actmon_dev->avg_count;
+
+       /* The bus can be considered to be saturated way before 100% */
+       stat->busy_time *= 100 / BUS_SATURATION_RATIO;
+
+       /* Number of cycles in a sampling period */
+       stat->total_time = ACTMON_SAMPLING_PERIOD * tegra->cur_freq;
+
+       return 0;
+}
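The status callback above converts the activity monitor's raw cycle count into the busy/total pair that devfreq governors expect. A hedged sketch of that arithmetic (the BUS_SATURATION_RATIO and ACTMON_SAMPLING_PERIOD values below are assumptions for illustration; both are defined elsewhere in the driver, not in this hunk):

```c
#include <assert.h>

#define BUS_SATURATION_RATIO    25      /* assumed value */
#define ACTMON_SAMPLING_PERIOD  12      /* ms, assumed value */

/* Busy cycles scaled up because the bus saturates well before 100%.
 * Note 100 / BUS_SATURATION_RATIO is integer division (here == 4),
 * matching the order of operations in the hunk above. */
static unsigned long busy_time(unsigned long avg_count)
{
        return avg_count * (100 / BUS_SATURATION_RATIO);
}

/* Total cycles in one sampling period at the current kHz frequency. */
static unsigned long total_time(unsigned long cur_freq_khz)
{
        return ACTMON_SAMPLING_PERIOD * cur_freq_khz;
}
```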
+
+static int tegra_devfreq_get_target(struct devfreq *devfreq,
+                                   unsigned long *freq)
+{
+       struct devfreq_dev_status stat;
+       struct tegra_devfreq *tegra;
+       struct tegra_devfreq_device *dev;
+       unsigned long target_freq = 0;
+       unsigned int i;
+       int err;
+
+       err = devfreq->profile->get_dev_status(devfreq->dev.parent, &stat);
+       if (err)
+               return err;
+
+       tegra = stat.private_data;
+
+       for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) {
+               dev = &tegra->devices[i];
+
+               actmon_update_target(tegra, dev);
+
+               target_freq = max(target_freq, dev->target_freq);
+       }
+
+       *freq = target_freq;
+
+       return 0;
+}
+
+static int tegra_devfreq_event_handler(struct devfreq *devfreq,
+                                      unsigned int event, void *data)
+{
+       return 0;
+}
+
+static struct devfreq_governor tegra_devfreq_governor = {
+       .name = "tegra",
+       .get_target_freq = tegra_devfreq_get_target,
+       .event_handler = tegra_devfreq_event_handler,
+};
+
+static struct devfreq_dev_profile tegra_devfreq_profile = {
+       .polling_ms     = 0,
+       .target         = tegra_devfreq_target,
+       .get_dev_status = tegra_devfreq_get_dev_status,
+};
+
+static int tegra_devfreq_probe(struct platform_device *pdev)
+{
+       struct tegra_devfreq *tegra;
+       struct tegra_devfreq_device *dev;
+       struct resource *res;
+       unsigned long max_freq;
+       unsigned int i;
+       int irq;
+       int err;
+
+       tegra = devm_kzalloc(&pdev->dev, sizeof(*tegra), GFP_KERNEL);
+       if (!tegra)
+               return -ENOMEM;
+
+       spin_lock_init(&tegra->lock);
+
+       res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+       if (!res) {
+               dev_err(&pdev->dev, "Failed to get regs resource\n");
+               return -ENODEV;
+       }
+
+       tegra->regs = devm_ioremap_resource(&pdev->dev, res);
+       if (IS_ERR(tegra->regs)) {
+               dev_err(&pdev->dev, "Failed to get IO memory\n");
+               return PTR_ERR(tegra->regs);
+       }
+
+       tegra->reset = devm_reset_control_get(&pdev->dev, "actmon");
+       if (IS_ERR(tegra->reset)) {
+               dev_err(&pdev->dev, "Failed to get reset\n");
+               return PTR_ERR(tegra->reset);
+       }
+
+       tegra->clock = devm_clk_get(&pdev->dev, "actmon");
+       if (IS_ERR(tegra->clock)) {
+               dev_err(&pdev->dev, "Failed to get actmon clock\n");
+               return PTR_ERR(tegra->clock);
+       }
+
+       tegra->emc_clock = devm_clk_get(&pdev->dev, "emc");
+       if (IS_ERR(tegra->emc_clock)) {
+               dev_err(&pdev->dev, "Failed to get emc clock\n");
+               return PTR_ERR(tegra->emc_clock);
+       }
+
+       err = of_init_opp_table(&pdev->dev);
+       if (err) {
+               dev_err(&pdev->dev, "Failed to init operating point table\n");
+               return err;
+       }
+
+       tegra->rate_change_nb.notifier_call = tegra_actmon_rate_notify_cb;
+       err = clk_notifier_register(tegra->emc_clock, &tegra->rate_change_nb);
+       if (err) {
+               dev_err(&pdev->dev,
+                       "Failed to register rate change notifier\n");
+               return err;
+       }
+
+       reset_control_assert(tegra->reset);
+
+       err = clk_prepare_enable(tegra->clock);
+       if (err) {
+               reset_control_deassert(tegra->reset);
+               return err;
+       }
+
+       reset_control_deassert(tegra->reset);
+
+       max_freq = clk_round_rate(tegra->emc_clock, ULONG_MAX);
+       tegra->max_freq = max_freq / KHZ;
+
+       clk_set_rate(tegra->emc_clock, max_freq);
+
+       tegra->cur_freq = clk_get_rate(tegra->emc_clock) / KHZ;
+
+       writel(ACTMON_SAMPLING_PERIOD - 1,
+              tegra->regs + ACTMON_GLB_PERIOD_CTRL);
+
+       for (i = 0; i < ARRAY_SIZE(actmon_device_configs); i++) {
+               dev = tegra->devices + i;
+               dev->config = actmon_device_configs + i;
+               dev->regs = tegra->regs + dev->config->offset;
+
+               tegra_actmon_configure_device(tegra, tegra->devices + i);
+       }
+
+       err = devfreq_add_governor(&tegra_devfreq_governor);
+       if (err) {
+               dev_err(&pdev->dev, "Failed to add governor\n");
+               return err;
+       }
+
+       tegra_devfreq_profile.initial_freq = clk_get_rate(tegra->emc_clock);
+       tegra->devfreq = devm_devfreq_add_device(&pdev->dev,
+                                                &tegra_devfreq_profile,
+                                                "tegra",
+                                                NULL);
+       if (IS_ERR(tegra->devfreq))
+               return PTR_ERR(tegra->devfreq);
+
+       irq = platform_get_irq(pdev, 0);
+       if (irq < 0) {
+               dev_err(&pdev->dev, "Failed to get IRQ\n");
+               return irq;
+       }
+
+       err = devm_request_threaded_irq(&pdev->dev, irq, actmon_isr,
+                                       actmon_thread_isr, IRQF_SHARED,
+                                       "tegra-devfreq", tegra);
+       if (err) {
+               dev_err(&pdev->dev, "Interrupt request failed\n");
+               return err;
+       }
+
+       platform_set_drvdata(pdev, tegra);
+
+       return 0;
+}
+
+static int tegra_devfreq_remove(struct platform_device *pdev)
+{
+       struct tegra_devfreq *tegra = platform_get_drvdata(pdev);
+
+       clk_notifier_unregister(tegra->emc_clock, &tegra->rate_change_nb);
+
+       clk_disable_unprepare(tegra->clock);
+
+       return 0;
+}
+
+static SIMPLE_DEV_PM_OPS(tegra_devfreq_pm_ops,
+                        tegra_devfreq_suspend,
+                        tegra_devfreq_resume);
+
+static struct of_device_id tegra_devfreq_of_match[] = {
+       { .compatible = "nvidia,tegra124-actmon" },
+       { },
+};
+
+static struct platform_driver tegra_devfreq_driver = {
+       .probe  = tegra_devfreq_probe,
+       .remove = tegra_devfreq_remove,
+       .driver = {
+               .name           = "tegra-devfreq",
+               .owner          = THIS_MODULE,
+               .of_match_table = tegra_devfreq_of_match,
+               .pm             = &tegra_devfreq_pm_ops,
+       },
+};
+module_platform_driver(tegra_devfreq_driver);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Tegra devfreq driver");
+MODULE_AUTHOR("Tomeu Vizoso <tomeu.vizoso@collabora.com>");
+MODULE_DEVICE_TABLE(of, tegra_devfreq_of_match);
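Several callbacks in the driver above receive only a `struct device *` and recover the enclosing platform device with `container_of()` before calling `platform_get_drvdata()`. A minimal userspace sketch of the `container_of()` step, with mock types standing in for the kernel's structures:

```c
#include <assert.h>
#include <stddef.h>

/* Mock types; the real struct platform_device embeds its struct device
 * the same way. */
struct device { int id; };
struct platform_device { int irq; struct device dev; };

/* Minimal re-implementation of the kernel's container_of(): given a
 * pointer to a member, subtract the member's byte offset to recover a
 * pointer to the enclosing structure. */
#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

static struct platform_device *dev_to_pdev(struct device *dev)
{
        return container_of(dev, struct platform_device, dev);
}
```

This is why the callbacks can start from the bare `struct device *` that the PM and devfreq cores hand them.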
index de361a156b341ab85a0b22490f2ff9407c93ce30..5a635646e05cfe87a9f92083c5864827c669ccee 100644 (file)
@@ -43,7 +43,7 @@ static int acpi_dma_parse_resource_group(const struct acpi_csrt_group *grp,
 {
        const struct acpi_csrt_shared_info *si;
        struct list_head resource_list;
-       struct resource_list_entry *rentry;
+       struct resource_entry *rentry;
        resource_size_t mem = 0, irq = 0;
        int ret;
 
@@ -56,10 +56,10 @@ static int acpi_dma_parse_resource_group(const struct acpi_csrt_group *grp,
                return 0;
 
        list_for_each_entry(rentry, &resource_list, node) {
-               if (resource_type(&rentry->res) == IORESOURCE_MEM)
-                       mem = rentry->res.start;
-               else if (resource_type(&rentry->res) == IORESOURCE_IRQ)
-                       irq = rentry->res.start;
+               if (resource_type(rentry->res) == IORESOURCE_MEM)
+                       mem = rentry->res->start;
+               else if (resource_type(rentry->res) == IORESOURCE_IRQ)
+                       irq = rentry->res->start;
        }
 
        acpi_dev_free_resource_list(&resource_list);
index 4d6b26979fbd54e457dfbcea4bfc9a6a26ec846c..bb3725b672cf0d58d4eb9d58efca7f85d8b32ef7 100644 (file)
@@ -861,8 +861,8 @@ static acpi_status vmbus_walk_resources(struct acpi_resource *res, void *ctx)
                break;
 
        case ACPI_RESOURCE_TYPE_ADDRESS64:
-               hyperv_mmio.start = res->data.address64.minimum;
-               hyperv_mmio.end = res->data.address64.maximum;
+               hyperv_mmio.start = res->data.address64.address.minimum;
+               hyperv_mmio.end = res->data.address64.address.maximum;
                break;
        }
 
index 6dbf6fcbdfaf1671163eba8b8345c21a5566b173..e8902f8dddfce6f664ab616b9b61ea2777ba2401 100644 (file)
@@ -386,7 +386,7 @@ static int __init pcc_init(void)
        ret = acpi_pcc_probe();
 
        if (ret) {
-               pr_err("ACPI PCC probe failed.\n");
+               pr_debug("ACPI PCC probe failed.\n");
                return -ENODEV;
        }
 
@@ -394,7 +394,7 @@ static int __init pcc_init(void)
                        pcc_mbox_probe, NULL, 0, NULL, 0);
 
        if (!pcc_pdev) {
-               pr_err("Err creating PCC platform bundle\n");
+               pr_debug("Err creating PCC platform bundle\n");
                return -ENODEV;
        }
 
index 60dc36c865b5d77f77966cc8ba9be7a863f25cea..110fece2ff537238f999cee084451964293bf429 100644 (file)
@@ -140,7 +140,7 @@ int of_pci_get_host_bridge_resources(struct device_node *dev,
                        unsigned char busno, unsigned char bus_max,
                        struct list_head *resources, resource_size_t *io_base)
 {
-       struct pci_host_bridge_window *window;
+       struct resource_entry *window;
        struct resource *res;
        struct resource *bus_range;
        struct of_pci_range range;
@@ -226,7 +226,7 @@ int of_pci_get_host_bridge_resources(struct device_node *dev,
 conversion_failed:
        kfree(res);
 parse_failed:
-       list_for_each_entry(window, resources, list)
+       resource_list_for_each_entry(window, resources)
                kfree(window->res);
        pci_free_resource_list(resources);
        kfree(bus_range);
index 8fb16188cd82aaff9d346a70f46e0257e468fe29..90fa3a78fb7ce18018566f2f1f63e9c9f9be1dfd 100644 (file)
 void pci_add_resource_offset(struct list_head *resources, struct resource *res,
                             resource_size_t offset)
 {
-       struct pci_host_bridge_window *window;
+       struct resource_entry *entry;
 
-       window = kzalloc(sizeof(struct pci_host_bridge_window), GFP_KERNEL);
-       if (!window) {
+       entry = resource_list_create_entry(res, 0);
+       if (!entry) {
                printk(KERN_ERR "PCI: can't add host bridge window %pR\n", res);
                return;
        }
 
-       window->res = res;
-       window->offset = offset;
-       list_add_tail(&window->list, resources);
+       entry->offset = offset;
+       resource_list_add_tail(entry, resources);
 }
 EXPORT_SYMBOL(pci_add_resource_offset);
 
@@ -42,12 +41,7 @@ EXPORT_SYMBOL(pci_add_resource);
 
 void pci_free_resource_list(struct list_head *resources)
 {
-       struct pci_host_bridge_window *window, *tmp;
-
-       list_for_each_entry_safe(window, tmp, resources, list) {
-               list_del(&window->list);
-               kfree(window);
-       }
+       resource_list_free(resources);
 }
 EXPORT_SYMBOL(pci_free_resource_list);
 
index 0e5f3c95af5bd75d288d2067b79aacc6abd65a94..39b2dbe585aae651c930704eb4f1f53a68c66203 100644 (file)
@@ -35,10 +35,10 @@ void pcibios_resource_to_bus(struct pci_bus *bus, struct pci_bus_region *region,
                             struct resource *res)
 {
        struct pci_host_bridge *bridge = find_pci_host_bridge(bus);
-       struct pci_host_bridge_window *window;
+       struct resource_entry *window;
        resource_size_t offset = 0;
 
-       list_for_each_entry(window, &bridge->windows, list) {
+       resource_list_for_each_entry(window, &bridge->windows) {
                if (resource_contains(window->res, res)) {
                        offset = window->offset;
                        break;
@@ -60,10 +60,10 @@ void pcibios_bus_to_resource(struct pci_bus *bus, struct resource *res,
                             struct pci_bus_region *region)
 {
        struct pci_host_bridge *bridge = find_pci_host_bridge(bus);
-       struct pci_host_bridge_window *window;
+       struct resource_entry *window;
        resource_size_t offset = 0;
 
-       list_for_each_entry(window, &bridge->windows, list) {
+       resource_list_for_each_entry(window, &bridge->windows) {
                struct pci_bus_region bus_region;
 
                if (resource_type(res) != resource_type(window->res))
index 925e29e3d4c80a208cdf80eeff06f9753c44834e..ba46e581db99925f9deaf3a392fff4d42d72a8e4 100644 (file)
@@ -103,14 +103,14 @@ static int gen_pci_parse_request_of_pci_ranges(struct gen_pci *pci)
        struct device *dev = pci->host.dev.parent;
        struct device_node *np = dev->of_node;
        resource_size_t iobase;
-       struct pci_host_bridge_window *win;
+       struct resource_entry *win;
 
        err = of_pci_get_host_bridge_resources(np, 0, 0xff, &pci->resources,
                                               &iobase);
        if (err)
                return err;
 
-       list_for_each_entry(win, &pci->resources, list) {
+       resource_list_for_each_entry(win, &pci->resources) {
                struct resource *parent, *res = win->res;
 
                switch (resource_type(res)) {
index 341529ca23e8e570c38cca1359eb02c4a4beb472..1ec694a52379ea4c753cc1b397c931bc940bb1cc 100644 (file)
@@ -74,13 +74,13 @@ static int versatile_pci_parse_request_of_pci_ranges(struct device *dev,
        int err, mem = 1, res_valid = 0;
        struct device_node *np = dev->of_node;
        resource_size_t iobase;
-       struct pci_host_bridge_window *win;
+       struct resource_entry *win;
 
        err = of_pci_get_host_bridge_resources(np, 0, 0xff, res, &iobase);
        if (err)
                return err;
 
-       list_for_each_entry(win, res, list) {
+       resource_list_for_each_entry(win, res) {
                struct resource *parent, *res = win->res;
 
                switch (resource_type(res)) {
index e77d831dc24136eedfe80568dbff8840b2567f51..aab55474dd0d6a4bd661ccbb160f1c2e8dc3c69c 100644 (file)
@@ -269,11 +269,11 @@ static int xgene_pcie_map_ranges(struct xgene_pcie_port *port,
                                 struct list_head *res,
                                 resource_size_t io_base)
 {
-       struct pci_host_bridge_window *window;
+       struct resource_entry *window;
        struct device *dev = port->dev;
        int ret;
 
-       list_for_each_entry(window, res, list) {
+       resource_list_for_each_entry(window, res) {
                struct resource *res = window->res;
                u64 restype = resource_type(res);
 
index eac4a4b957caa7062d5bfbfe39b4e58ea5cb3b81..f1a06a091ccb578359656b1d9c4b32c0e99b2d99 100644 (file)
@@ -667,7 +667,7 @@ static int xilinx_pcie_parse_and_add_res(struct xilinx_pcie_port *port)
        resource_size_t offset;
        struct of_pci_range_parser parser;
        struct of_pci_range range;
-       struct pci_host_bridge_window *win;
+       struct resource_entry *win;
        int err = 0, mem_resno = 0;
 
        /* Get the ranges */
@@ -737,7 +737,7 @@ static int xilinx_pcie_parse_and_add_res(struct xilinx_pcie_port *port)
 
 free_resources:
        release_child_resources(&iomem_resource);
-       list_for_each_entry(win, &port->resources, list)
+       resource_list_for_each_entry(win, &port->resources)
                devm_kfree(dev, win->res);
        pci_free_resource_list(&port->resources);
 
index bada209998705732e488a9e5e4dbb0139a43ad90..c32fb786d48e3c76f37b25e4b422e5796e884acf 100644 (file)
@@ -475,7 +475,7 @@ static int disable_slot(struct hotplug_slot *bss_hotplug_slot)
        struct slot *slot = bss_hotplug_slot->private;
        struct pci_dev *dev, *temp;
        int rc;
-       acpi_owner_id ssdt_id = 0;
+       acpi_handle ssdt_hdl = NULL;
 
        /* Acquire update access to the bus */
        mutex_lock(&sn_hotplug_mutex);
@@ -522,7 +522,7 @@ static int disable_slot(struct hotplug_slot *bss_hotplug_slot)
                        if (ACPI_SUCCESS(ret) &&
                            (adr>>16) == (slot->device_num + 1)) {
                                /* retain the owner id */
-                               acpi_get_id(chandle, &ssdt_id);
+                               ssdt_hdl = chandle;
 
                                ret = acpi_bus_get_device(chandle,
                                                          &device);
@@ -547,12 +547,13 @@ static int disable_slot(struct hotplug_slot *bss_hotplug_slot)
        pci_unlock_rescan_remove();
 
        /* Remove the SSDT for the slot from the ACPI namespace */
-       if (SN_ACPI_BASE_SUPPORT() && ssdt_id) {
+       if (SN_ACPI_BASE_SUPPORT() && ssdt_hdl) {
                acpi_status ret;
-               ret = acpi_unload_table_id(ssdt_id);
+               ret = acpi_unload_parent_table(ssdt_hdl);
                if (ACPI_FAILURE(ret)) {
-                       printk(KERN_ERR "%s: acpi_unload_table_id failed (0x%x) for id %d\n",
-                              __func__, ret, ssdt_id);
+                       acpi_handle_err(ssdt_hdl,
+                                       "%s: acpi_unload_parent_table failed (0x%x)\n",
+                                       __func__, ret);
                        /* try to continue on */
                }
        }
index 3542150fc8a3fc247cd99777f588d30a93a99c41..4890639873256812b721d83f304dff634ef58da1 100644 (file)
@@ -501,12 +501,29 @@ static int acpi_pci_run_wake(struct pci_dev *dev, bool enable)
        return 0;
 }
 
+static bool acpi_pci_need_resume(struct pci_dev *dev)
+{
+       struct acpi_device *adev = ACPI_COMPANION(&dev->dev);
+
+       if (!adev || !acpi_device_power_manageable(adev))
+               return false;
+
+       if (device_may_wakeup(&dev->dev) != !!adev->wakeup.prepare_count)
+               return true;
+
+       if (acpi_target_system_state() == ACPI_STATE_S0)
+               return false;
+
+       return !!adev->power.flags.dsw_present;
+}
+
 static struct pci_platform_pm_ops acpi_pci_platform_pm = {
        .is_manageable = acpi_pci_power_manageable,
        .set_state = acpi_pci_set_power_state,
        .choose_state = acpi_pci_choose_state,
        .sleep_wake = acpi_pci_sleep_wake,
        .run_wake = acpi_pci_run_wake,
+       .need_resume = acpi_pci_need_resume,
 };
 
 void acpi_pci_add_bus(struct pci_bus *bus)
index 09a66bad801838052438a6839cff914cf2016c58..3cb2210de5530e3edffd3286df3a66f455f6b2ec 100644 (file)
@@ -653,7 +653,6 @@ static bool pci_has_legacy_pm_support(struct pci_dev *pci_dev)
 static int pci_pm_prepare(struct device *dev)
 {
        struct device_driver *drv = dev->driver;
-       int error = 0;
 
        /*
         * Devices having power.ignore_children set may still be necessary for
@@ -662,10 +661,12 @@ static int pci_pm_prepare(struct device *dev)
        if (dev->power.ignore_children)
                pm_runtime_resume(dev);
 
-       if (drv && drv->pm && drv->pm->prepare)
-               error = drv->pm->prepare(dev);
-
-       return error;
+       if (drv && drv->pm && drv->pm->prepare) {
+               int error = drv->pm->prepare(dev);
+               if (error)
+                       return error;
+       }
+       return pci_dev_keep_suspended(to_pci_dev(dev));
 }
 
 
index 460d046ab6fed855d546962b035b76a132da7a47..81f06e8dcc0449688e05b65504413320628b1579 100644 (file)
@@ -523,6 +523,11 @@ static inline int platform_pci_run_wake(struct pci_dev *dev, bool enable)
                        pci_platform_pm->run_wake(dev, enable) : -ENODEV;
 }
 
+static inline bool platform_pci_need_resume(struct pci_dev *dev)
+{
+       return pci_platform_pm ? pci_platform_pm->need_resume(dev) : false;
+}
+
 /**
  * pci_raw_set_power_state - Use PCI PM registers to set the power state of
  *                           given PCI device
@@ -2001,6 +2006,27 @@ bool pci_dev_run_wake(struct pci_dev *dev)
 }
 EXPORT_SYMBOL_GPL(pci_dev_run_wake);
 
+/**
+ * pci_dev_keep_suspended - Check if the device can stay in the suspended state.
+ * @pci_dev: Device to check.
+ *
+ * Return 'true' if the device is runtime-suspended, it doesn't have to be
+ * reconfigured due to wakeup settings difference between system and runtime
+ * suspend and the current power state of it is suitable for the upcoming
+ * (system) transition.
+ */
+bool pci_dev_keep_suspended(struct pci_dev *pci_dev)
+{
+       struct device *dev = &pci_dev->dev;
+
+       if (!pm_runtime_suspended(dev)
+           || (device_can_wakeup(dev) && !device_may_wakeup(dev))
+           || platform_pci_need_resume(pci_dev))
+               return false;
+
+       return pci_target_state(pci_dev) == pci_dev->current_state;
+}
+
 void pci_config_pm_runtime_get(struct pci_dev *pdev)
 {
        struct device *dev = &pdev->dev;
index d54632a1db43cbed30d89fb91b84fef489cbc4a4..4091f82239cdb86141bb3dfd34e583bff8b027db 100644 (file)
@@ -50,6 +50,10 @@ int pci_probe_reset_function(struct pci_dev *dev);
  *             for given device (the device's wake-up capability has to be
  *             enabled by @sleep_wake for this feature to work)
  *
+ * @need_resume: returns 'true' if the given device (which is currently
+ *             suspended) needs to be resumed to be configured for system
+ *             wakeup.
+ *
  * If given platform is generally capable of power managing PCI devices, all of
  * these callbacks are mandatory.
  */
@@ -59,6 +63,7 @@ struct pci_platform_pm_ops {
        pci_power_t (*choose_state)(struct pci_dev *dev);
        int (*sleep_wake)(struct pci_dev *dev, bool enable);
        int (*run_wake)(struct pci_dev *dev, bool enable);
+       bool (*need_resume)(struct pci_dev *dev);
 };
 
 int pci_set_platform_pm(struct pci_platform_pm_ops *ops);
@@ -67,6 +72,7 @@ void pci_power_up(struct pci_dev *dev);
 void pci_disable_enabled_device(struct pci_dev *dev);
 int pci_finish_runtime_suspend(struct pci_dev *dev);
 int __pci_pme_wakeup(struct pci_dev *dev, void *ign);
+bool pci_dev_keep_suspended(struct pci_dev *dev);
 void pci_config_pm_runtime_get(struct pci_dev *dev);
 void pci_config_pm_runtime_put(struct pci_dev *dev);
 void pci_pm_init(struct pci_dev *dev);
index 23212f8ae09b5e1d853d6ad11410731cc1a17505..8d2f400e96cb848260fb3882a385dae0259c7ee4 100644 (file)
@@ -1895,7 +1895,7 @@ struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
        int error;
        struct pci_host_bridge *bridge;
        struct pci_bus *b, *b2;
-       struct pci_host_bridge_window *window, *n;
+       struct resource_entry *window, *n;
        struct resource *res;
        resource_size_t offset;
        char bus_addr[64];
@@ -1959,8 +1959,8 @@ struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
                printk(KERN_INFO "PCI host bridge to bus %s\n", dev_name(&b->dev));
 
        /* Add initial resources to the bus */
-       list_for_each_entry_safe(window, n, resources, list) {
-               list_move_tail(&window->list, &bridge->windows);
+       resource_list_for_each_entry_safe(window, n, resources) {
+               list_move_tail(&window->node, &bridge->windows);
                res = window->res;
                offset = window->offset;
                if (res->flags & IORESOURCE_BUS)
@@ -2060,12 +2060,12 @@ void pci_bus_release_busn_res(struct pci_bus *b)
 struct pci_bus *pci_scan_root_bus(struct device *parent, int bus,
                struct pci_ops *ops, void *sysdata, struct list_head *resources)
 {
-       struct pci_host_bridge_window *window;
+       struct resource_entry *window;
        bool found = false;
        struct pci_bus *b;
        int max;
 
-       list_for_each_entry(window, resources, list)
+       resource_list_for_each_entry(window, resources)
                if (window->res->flags & IORESOURCE_BUS) {
                        found = true;
                        break;
index 66977ebf13b30cba09b6bb1b7c06eeb07e2e5bd0..ff0356fb378ff514984fac72383f78a01ed93487 100644 (file)
@@ -180,20 +180,21 @@ static acpi_status pnpacpi_allocated_resource(struct acpi_resource *res,
        struct pnp_dev *dev = data;
        struct acpi_resource_dma *dma;
        struct acpi_resource_vendor_typed *vendor_typed;
-       struct resource r = {0};
+       struct resource_win win = {{0}, 0};
+       struct resource *r = &win.res;
        int i, flags;
 
-       if (acpi_dev_resource_address_space(res, &r)
-           || acpi_dev_resource_ext_address_space(res, &r)) {
-               pnp_add_resource(dev, &r);
+       if (acpi_dev_resource_address_space(res, &win)
+           || acpi_dev_resource_ext_address_space(res, &win)) {
+               pnp_add_resource(dev, &win.res);
                return AE_OK;
        }
 
-       r.flags = 0;
-       if (acpi_dev_resource_interrupt(res, 0, &r)) {
-               pnpacpi_add_irqresource(dev, &r);
-               for (i = 1; acpi_dev_resource_interrupt(res, i, &r); i++)
-                       pnpacpi_add_irqresource(dev, &r);
+       r->flags = 0;
+       if (acpi_dev_resource_interrupt(res, 0, r)) {
+               pnpacpi_add_irqresource(dev, r);
+               for (i = 1; acpi_dev_resource_interrupt(res, i, r); i++)
+                       pnpacpi_add_irqresource(dev, r);
 
                if (i > 1) {
                        /*
@@ -209,7 +210,7 @@ static acpi_status pnpacpi_allocated_resource(struct acpi_resource *res,
                        }
                }
                return AE_OK;
-       } else if (r.flags & IORESOURCE_DISABLED) {
+       } else if (r->flags & IORESOURCE_DISABLED) {
                pnp_add_irq_resource(dev, 0, IORESOURCE_DISABLED);
                return AE_OK;
        }
@@ -218,13 +219,13 @@ static acpi_status pnpacpi_allocated_resource(struct acpi_resource *res,
        case ACPI_RESOURCE_TYPE_MEMORY24:
        case ACPI_RESOURCE_TYPE_MEMORY32:
        case ACPI_RESOURCE_TYPE_FIXED_MEMORY32:
-               if (acpi_dev_resource_memory(res, &r))
-                       pnp_add_resource(dev, &r);
+               if (acpi_dev_resource_memory(res, r))
+                       pnp_add_resource(dev, r);
                break;
        case ACPI_RESOURCE_TYPE_IO:
        case ACPI_RESOURCE_TYPE_FIXED_IO:
-               if (acpi_dev_resource_io(res, &r))
-                       pnp_add_resource(dev, &r);
+               if (acpi_dev_resource_io(res, r))
+                       pnp_add_resource(dev, r);
                break;
        case ACPI_RESOURCE_TYPE_DMA:
                dma = &res->data.dma;
@@ -410,12 +411,12 @@ static __init void pnpacpi_parse_address_option(struct pnp_dev *dev,
        if (p->resource_type == ACPI_MEMORY_RANGE) {
                if (p->info.mem.write_protect == ACPI_READ_WRITE_MEMORY)
                        flags = IORESOURCE_MEM_WRITEABLE;
-               pnp_register_mem_resource(dev, option_flags, p->minimum,
-                                         p->minimum, 0, p->address_length,
+               pnp_register_mem_resource(dev, option_flags, p->address.minimum,
+                                         p->address.minimum, 0, p->address.address_length,
                                          flags);
        } else if (p->resource_type == ACPI_IO_RANGE)
-               pnp_register_port_resource(dev, option_flags, p->minimum,
-                                          p->minimum, 0, p->address_length,
+               pnp_register_port_resource(dev, option_flags, p->address.minimum,
+                                          p->address.minimum, 0, p->address.address_length,
                                           IORESOURCE_IO_FIXED);
 }
 
@@ -429,12 +430,12 @@ static __init void pnpacpi_parse_ext_address_option(struct pnp_dev *dev,
        if (p->resource_type == ACPI_MEMORY_RANGE) {
                if (p->info.mem.write_protect == ACPI_READ_WRITE_MEMORY)
                        flags = IORESOURCE_MEM_WRITEABLE;
-               pnp_register_mem_resource(dev, option_flags, p->minimum,
-                                         p->minimum, 0, p->address_length,
+               pnp_register_mem_resource(dev, option_flags, p->address.minimum,
+                                         p->address.minimum, 0, p->address.address_length,
                                          flags);
        } else if (p->resource_type == ACPI_IO_RANGE)
-               pnp_register_port_resource(dev, option_flags, p->minimum,
-                                          p->minimum, 0, p->address_length,
+               pnp_register_port_resource(dev, option_flags, p->address.minimum,
+                                          p->address.minimum, 0, p->address.address_length,
                                           IORESOURCE_IO_FIXED);
 }
 
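The pnpacpi hunks above can pass `&win` to the address-space parsers and `&win.res` (or `r`) everywhere else because `struct resource_win` embeds a `struct resource` as its first member. A small mock demonstrating that layout assumption:

```c
#include <assert.h>
#include <stddef.h>

/* Mock layout; the real resource_win likewise puts res first, so a
 * pointer to the window and a pointer to its embedded resource alias. */
struct resource { unsigned long start, end, flags; };
struct resource_win { struct resource res; unsigned long offset; };
```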
index 1e824fb1649bfc9a2d5b218f457c95796bc0447c..296db7a69c27548c2529c52d532832c39f0db1aa 100644 (file)
@@ -161,7 +161,7 @@ static int sfi_verify_table(struct sfi_table_header *table)
  * Check for common case that we can re-use mapping to SYST,
  * which requires syst_pa, syst_va to be initialized.
  */
-struct sfi_table_header *sfi_map_table(u64 pa)
+static struct sfi_table_header *sfi_map_table(u64 pa)
 {
        struct sfi_table_header *th;
        u32 length;
@@ -189,7 +189,7 @@ struct sfi_table_header *sfi_map_table(u64 pa)
  * Undoes effect of sfi_map_table() by unmapping table
  * if it did not completely fit on same page as SYST.
  */
-void sfi_unmap_table(struct sfi_table_header *th)
+static void sfi_unmap_table(struct sfi_table_header *th)
 {
        if (!TABLE_ON_PAGE(syst_va, th, th->len))
                sfi_unmap_memory(th, TABLE_ON_PAGE(th, th, th->len) ?
index aeb50bb6ba9ca17e345f3e7e737560d03b88abbe..eaffb0248de1a6791c18bd3d43410d84015ad889 100644 (file)
@@ -3452,8 +3452,6 @@ int usb_port_resume(struct usb_device *udev, pm_message_t msg)
        return status;
 }
 
-#ifdef CONFIG_PM
-
 int usb_remote_wakeup(struct usb_device *udev)
 {
        int     status = 0;
@@ -3512,16 +3510,6 @@ static int hub_handle_remote_wakeup(struct usb_hub *hub, unsigned int port,
        return connect_change;
 }
 
-#else
-
-static int hub_handle_remote_wakeup(struct usb_hub *hub, unsigned int port,
-               u16 portstatus, u16 portchange)
-{
-       return 0;
-}
-
-#endif
-
 static int check_ports_changed(struct usb_hub *hub)
 {
        int port1;
index 34e40b733f9a8f27cdfb142f8ce67f081188d63b..4fc886cd5586728629058b6e3b8d7fd611c78b94 100644 (file)
@@ -117,8 +117,8 @@ acpi_memory_get_resource(struct acpi_resource *resource, void *context)
        list_for_each_entry(info, &mem_device->res_list, list) {
                if ((info->caching == address64.info.mem.caching) &&
                    (info->write_protect == address64.info.mem.write_protect) &&
-                   (info->start_addr + info->length == address64.minimum)) {
-                       info->length += address64.address_length;
+                   (info->start_addr + info->length == address64.address.minimum)) {
+                       info->length += address64.address.address_length;
                        return AE_OK;
                }
        }
@@ -130,8 +130,8 @@ acpi_memory_get_resource(struct acpi_resource *resource, void *context)
        INIT_LIST_HEAD(&new->list);
        new->caching = address64.info.mem.caching;
        new->write_protect = address64.info.mem.write_protect;
-       new->start_addr = address64.minimum;
-       new->length = address64.address_length;
+       new->start_addr = address64.address.minimum;
+       new->length = address64.address.address_length;
        list_add_tail(&new->list, &mem_device->res_list);
 
        return AE_OK;
index d5ec6c87810f1045c563ace401d2599de789cd1e..6b040f4ddfabc3131a9290aaf10319928d12bbbd 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 5a0a3e5daf85de90c942ba59950023b5ffa2e49e..03aacfb3e98b88ed7f2eaaa28c6fb5eeb3556c4f 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 8b06e4c1dd5d1873b08c3490b8d80a40283b4e87..11c3a011dcbf1ba5deb7acebfb55880d51a146b9 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 7461327e14e43ade9065e56b9dab955a33970f3c..273de709495c1db522d46dad7f339677fc8df756 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 1baae6edda8925d5205f3151c72c19cb31391e43..9318a87ee39ab9adeb4e108df1bc686e722a5bd5 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index a08e55a263c97b04ee5a60b2a910d5425a6ac370..b0bb30ebb807987754ad8ca64c3f3706645a51d2 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 03b3e6d405ffdb7b538cab4d967040fedde4fbef..0bc78df66d4b10764c464986dd9262f69844f94d 100644 (file)
@@ -7,7 +7,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 5ba78464c1b1689f58fb569a39b0794b30d8a9fc..d56f5d7221389d5b315ba5b5ef278d1ff0b532b1 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -46,7 +46,7 @@
 
 /* Current ACPICA subsystem version in YYYYMMDD format */
 
-#define ACPI_CA_VERSION                 0x20141107
+#define ACPI_CA_VERSION                 0x20150204
 
 #include <acpi/acconfig.h>
 #include <acpi/actypes.h>
@@ -568,6 +568,14 @@ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
                                                          acpi_gpe_handler
                                                          address,
                                                          void *context))
+ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
+                                acpi_install_gpe_raw_handler(acpi_handle
+                                                             gpe_device,
+                                                             u32 gpe_number,
+                                                             u32 type,
+                                                             acpi_gpe_handler
+                                                             address,
+                                                             void *context))
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
                                 acpi_remove_gpe_handler(acpi_handle gpe_device,
                                                         u32 gpe_number,
@@ -890,12 +898,6 @@ ACPI_APP_DEPENDENT_RETURN_VOID(ACPI_PRINTF_LIKE(1)
  */
 ACPI_GLOBAL(u8, acpi_gbl_permanent_mmap);
 
-ACPI_EXTERNAL_RETURN_STATUS(acpi_status
-                           acpi_get_id(acpi_handle object,
-                                       acpi_owner_id * out_type))
-
-ACPI_EXTERNAL_RETURN_STATUS(acpi_status acpi_unload_table_id(acpi_owner_id id))
-
 ACPI_EXTERNAL_RETURN_STATUS(acpi_status
                            acpi_get_table_with_size(acpi_string signature,
                                                     u32 instance,
index eb760ca0b2e05884745e02e09aec222744829c25..ebe242638591dddb31ab28e032f8e80da312b0db 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -305,43 +305,51 @@ struct acpi_resource_source {
        u8                                      max_address_fixed; \
        union acpi_resource_attribute           info;
 
-struct acpi_resource_address {
-ACPI_RESOURCE_ADDRESS_COMMON};
-
-struct acpi_resource_address16 {
-       ACPI_RESOURCE_ADDRESS_COMMON u16 granularity;
+struct acpi_address16_attribute {
+       u16 granularity;
        u16 minimum;
        u16 maximum;
        u16 translation_offset;
        u16 address_length;
-       struct acpi_resource_source resource_source;
 };
 
-struct acpi_resource_address32 {
-       ACPI_RESOURCE_ADDRESS_COMMON u32 granularity;
+struct acpi_address32_attribute {
+       u32 granularity;
        u32 minimum;
        u32 maximum;
        u32 translation_offset;
        u32 address_length;
-       struct acpi_resource_source resource_source;
 };
 
-struct acpi_resource_address64 {
-       ACPI_RESOURCE_ADDRESS_COMMON u64 granularity;
+struct acpi_address64_attribute {
+       u64 granularity;
        u64 minimum;
        u64 maximum;
        u64 translation_offset;
        u64 address_length;
+};
+
+struct acpi_resource_address {
+ACPI_RESOURCE_ADDRESS_COMMON};
+
+struct acpi_resource_address16 {
+       ACPI_RESOURCE_ADDRESS_COMMON struct acpi_address16_attribute address;
+       struct acpi_resource_source resource_source;
+};
+
+struct acpi_resource_address32 {
+       ACPI_RESOURCE_ADDRESS_COMMON struct acpi_address32_attribute address;
+       struct acpi_resource_source resource_source;
+};
+
+struct acpi_resource_address64 {
+       ACPI_RESOURCE_ADDRESS_COMMON struct acpi_address64_attribute address;
        struct acpi_resource_source resource_source;
 };
 
 struct acpi_resource_extended_address64 {
        ACPI_RESOURCE_ADDRESS_COMMON u8 revision_ID;
-       u64 granularity;
-       u64 minimum;
-       u64 maximum;
-       u64 translation_offset;
-       u64 address_length;
+       struct acpi_address64_attribute address;
        u64 type_specific;
 };
 
index bee19d8170c5d7c44c96a6d55082ffaf843aee51..d4081fef1095bded3f607c04ed26d514efefa208 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 29e79370641da878dbf0875ad87950f6a0f4c83f..b80b0e6dabc568700346a1bf941467a9eed438df 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index ecff62405f17162f09901d53dfdf113452808dc8..f06d75e5fa547bd08e91c715b98f8e3598a6b96c 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 5480cb2236bf33356cb3b0c8811de2de280897af..440ca8104b437c0b8c8a2eee31eea1afa6a7b2b3 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index bbef17368e49fb65c0870f54ab59e388a76e4e62..b034f1068dfe40f6c0496f388dad310f491dcbe8 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -744,7 +744,7 @@ typedef u32 acpi_event_status;
 /*
  * GPE info flags - Per GPE
  * +-------+-+-+---+
- * |  7:4  |3|2|1:0|
+ * |  7:5  |4|3|2:0|
  * +-------+-+-+---+
  *     |    | |  |
  *     |    | |  +-- Type of dispatch:to method, handler, notify, or none
@@ -756,13 +756,15 @@ typedef u32 acpi_event_status;
 #define ACPI_GPE_DISPATCH_METHOD        (u8) 0x01
 #define ACPI_GPE_DISPATCH_HANDLER       (u8) 0x02
 #define ACPI_GPE_DISPATCH_NOTIFY        (u8) 0x03
-#define ACPI_GPE_DISPATCH_MASK          (u8) 0x03
+#define ACPI_GPE_DISPATCH_RAW_HANDLER   (u8) 0x04
+#define ACPI_GPE_DISPATCH_MASK          (u8) 0x07
+#define ACPI_GPE_DISPATCH_TYPE(flags)   ((u8) ((flags) & ACPI_GPE_DISPATCH_MASK))
 
-#define ACPI_GPE_LEVEL_TRIGGERED        (u8) 0x04
+#define ACPI_GPE_LEVEL_TRIGGERED        (u8) 0x08
 #define ACPI_GPE_EDGE_TRIGGERED         (u8) 0x00
-#define ACPI_GPE_XRUPT_TYPE_MASK        (u8) 0x04
+#define ACPI_GPE_XRUPT_TYPE_MASK        (u8) 0x08
 
-#define ACPI_GPE_CAN_WAKE               (u8) 0x08
+#define ACPI_GPE_CAN_WAKE               (u8) 0x10
 
 /*
  * Flags for GPE and Lock interfaces
index 5f8cc1fa3278b071c2f733e8f62b44cd10fac2bb..ad74dc51d5b771760c655e4ee96a1e115c5a62ed 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 2b612384c9942fea18d6efd1a76ad20f6bb05268..71e5ec5b07a3436b872326a4bafece104e2b7343 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 384875da37137be7f24c9c1474d7cbe60dc0e318..f54de0a635582d45b3aca6bb1977313d431e0549 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 1ba7c190c2ccbfc22ca9f576a0f7faf9ef96d979..74ba46c8157a6f13098d27bd2338b8c8bf7db93f 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 568d4b886712ddd8aae8ba244878d1ca38e30b65..acedc3f026de7bd6e3a8a8fce834000e97efdcdc 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index d459cd17b477600cadf54ad2a227b66c5638112f..24c7aa8b1d20c6fce9dd7ede9d051d04afc7c07d 100644 (file)
@@ -27,6 +27,7 @@
 
 #include <linux/errno.h>
 #include <linux/ioport.h>      /* for struct resource */
+#include <linux/resource_ext.h>
 #include <linux/device.h>
 #include <linux/property.h>
 
@@ -151,6 +152,10 @@ int acpi_map_cpu(acpi_handle handle, int physid, int *pcpu);
 int acpi_unmap_cpu(int cpu);
 #endif /* CONFIG_ACPI_HOTPLUG_CPU */
 
+#ifdef CONFIG_ACPI_HOTPLUG_IOAPIC
+int acpi_get_ioapic_id(acpi_handle handle, u32 gsi_base, u64 *phys_addr);
+#endif
+
 int acpi_register_ioapic(acpi_handle handle, u64 phys_addr, u32 gsi_base);
 int acpi_unregister_ioapic(acpi_handle handle, u32 gsi_base);
 int acpi_ioapic_registered(acpi_handle handle, u32 gsi_base);
@@ -288,22 +293,25 @@ extern int pnpacpi_disabled;
 bool acpi_dev_resource_memory(struct acpi_resource *ares, struct resource *res);
 bool acpi_dev_resource_io(struct acpi_resource *ares, struct resource *res);
 bool acpi_dev_resource_address_space(struct acpi_resource *ares,
-                                    struct resource *res);
+                                    struct resource_win *win);
 bool acpi_dev_resource_ext_address_space(struct acpi_resource *ares,
-                                        struct resource *res);
+                                        struct resource_win *win);
 unsigned long acpi_dev_irq_flags(u8 triggering, u8 polarity, u8 shareable);
 bool acpi_dev_resource_interrupt(struct acpi_resource *ares, int index,
                                 struct resource *res);
 
-struct resource_list_entry {
-       struct list_head node;
-       struct resource res;
-};
-
 void acpi_dev_free_resource_list(struct list_head *list);
 int acpi_dev_get_resources(struct acpi_device *adev, struct list_head *list,
                           int (*preproc)(struct acpi_resource *, void *),
                           void *preproc_data);
+int acpi_dev_filter_resource_type(struct acpi_resource *ares,
+                                 unsigned long types);
+
+static inline int acpi_dev_filter_resource_type_cb(struct acpi_resource *ares,
+                                                  void *arg)
+{
+       return acpi_dev_filter_resource_type(ares, (unsigned long)arg);
+}
 
 int acpi_check_resource_conflict(const struct resource *res);
 
index 4d078cebafd2dd5f059c65bd75a683a81b80e15f..2ee4888c1f47f6cbdcc0c7d49e48cc74f7729e05 100644 (file)
@@ -66,8 +66,6 @@ struct cpufreq_policy {
        unsigned int            shared_type; /* ACPI: ANY or ALL affected CPUs
                                                should set cpufreq */
        unsigned int            cpu;    /* cpu nr of CPU managing this policy */
-       unsigned int            last_cpu; /* cpu nr of previous CPU that managed
-                                          * this policy */
        struct clk              *clk;
        struct cpufreq_cpuinfo  cpuinfo;/* see above */
 
@@ -113,6 +111,9 @@ struct cpufreq_policy {
        wait_queue_head_t       transition_wait;
        struct task_struct      *transition_task; /* Task which is doing the transition */
 
+       /* cpufreq-stats */
+       struct cpufreq_stats    *stats;
+
        /* For cpufreq driver's internal use */
        void                    *driver_data;
 };
@@ -367,9 +368,8 @@ static inline void cpufreq_resume(void) {}
 #define CPUFREQ_INCOMPATIBLE           (1)
 #define CPUFREQ_NOTIFY                 (2)
 #define CPUFREQ_START                  (3)
-#define CPUFREQ_UPDATE_POLICY_CPU      (4)
-#define CPUFREQ_CREATE_POLICY          (5)
-#define CPUFREQ_REMOVE_POLICY          (6)
+#define CPUFREQ_CREATE_POLICY          (4)
+#define CPUFREQ_REMOVE_POLICY          (5)
 
 #ifdef CONFIG_CPU_FREQ
 int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list);
diff --git a/include/linux/devfreq-event.h b/include/linux/devfreq-event.h
new file mode 100644 (file)
index 0000000..602fbbf
--- /dev/null
@@ -0,0 +1,196 @@
+/*
+ * devfreq-event: a framework to provide raw data and events of devfreq devices
+ *
+ * Copyright (C) 2014 Samsung Electronics
+ * Author: Chanwoo Choi <cw00.choi@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __LINUX_DEVFREQ_EVENT_H__
+#define __LINUX_DEVFREQ_EVENT_H__
+
+#include <linux/device.h>
+
+/**
+ * struct devfreq_event_dev - the devfreq-event device
+ *
+ * @node       : Contains the devfreq-event devices that have been registered.
+ * @dev                : the device registered by devfreq-event class. dev.parent is
+ *               the device using devfreq-event.
+ * @lock       : a mutex to protect accessing devfreq-event.
+ * @enable_count: the number of times the enable function has been called.
+ * @desc       : the description for devfreq-event device.
+ *
+ * This structure contains devfreq-event device information.
+ */
+struct devfreq_event_dev {
+       struct list_head node;
+
+       struct device dev;
+       struct mutex lock;
+       u32 enable_count;
+
+       const struct devfreq_event_desc *desc;
+};
+
+/**
+ * struct devfreq_event_data - the devfreq-event data
+ *
+ * @load_count : load count of devfreq-event device for the given period.
+ * @total_count        : total count of devfreq-event device for the given period.
+ *               each count may represent a clock cycle, a time unit
+ *               (ns/us/...), or anything the device driver wants.
+ *               Generally, utilization is load_count / total_count.
+ *
+ * This structure contains the data of a devfreq-event device for one polling period.
+ */
+struct devfreq_event_data {
+       unsigned long load_count;
+       unsigned long total_count;
+};
+
+/**
+ * struct devfreq_event_ops - the operations of devfreq-event device
+ *
+ * @enable     : Enable the devfreq-event device.
+ * @disable    : Disable the devfreq-event device.
+ * @reset      : Reset all setting of the devfreq-event device.
+ * @set_event  : Set the specific event type for the devfreq-event device.
+ * @get_event  : Get the result of the devfreq-event device with a specific
+ *               event type.
+ *
+ * This structure contains devfreq-event device operations which can be
+ * implemented by devfreq-event device drivers.
+ */
+struct devfreq_event_ops {
+       /* Optional functions */
+       int (*enable)(struct devfreq_event_dev *edev);
+       int (*disable)(struct devfreq_event_dev *edev);
+       int (*reset)(struct devfreq_event_dev *edev);
+
+       /* Mandatory functions */
+       int (*set_event)(struct devfreq_event_dev *edev);
+       int (*get_event)(struct devfreq_event_dev *edev,
+                        struct devfreq_event_data *edata);
+};
+
+/**
+ * struct devfreq_event_desc - the descriptor of devfreq-event device
+ *
+ * @name       : the name of devfreq-event device.
+ * @driver_data        : the private data for devfreq-event driver.
+ * @ops                : the operation to control devfreq-event device.
+ *
+ * Each devfreq-event device is described by this structure.
+ * This structure contains the various data for devfreq-event device.
+ */
+struct devfreq_event_desc {
+       const char *name;
+       void *driver_data;
+
+       struct devfreq_event_ops *ops;
+};
+
+#if defined(CONFIG_PM_DEVFREQ_EVENT)
+extern int devfreq_event_enable_edev(struct devfreq_event_dev *edev);
+extern int devfreq_event_disable_edev(struct devfreq_event_dev *edev);
+extern bool devfreq_event_is_enabled(struct devfreq_event_dev *edev);
+extern int devfreq_event_set_event(struct devfreq_event_dev *edev);
+extern int devfreq_event_get_event(struct devfreq_event_dev *edev,
+                               struct devfreq_event_data *edata);
+extern int devfreq_event_reset_event(struct devfreq_event_dev *edev);
+extern struct devfreq_event_dev *devfreq_event_get_edev_by_phandle(
+                               struct device *dev, int index);
+extern int devfreq_event_get_edev_count(struct device *dev);
+extern struct devfreq_event_dev *devfreq_event_add_edev(struct device *dev,
+                               struct devfreq_event_desc *desc);
+extern int devfreq_event_remove_edev(struct devfreq_event_dev *edev);
+extern struct devfreq_event_dev *devm_devfreq_event_add_edev(struct device *dev,
+                               struct devfreq_event_desc *desc);
+extern void devm_devfreq_event_remove_edev(struct device *dev,
+                               struct devfreq_event_dev *edev);
+static inline void *devfreq_event_get_drvdata(struct devfreq_event_dev *edev)
+{
+       return edev->desc->driver_data;
+}
+#else
+static inline int devfreq_event_enable_edev(struct devfreq_event_dev *edev)
+{
+       return -EINVAL;
+}
+
+static inline int devfreq_event_disable_edev(struct devfreq_event_dev *edev)
+{
+       return -EINVAL;
+}
+
+static inline bool devfreq_event_is_enabled(struct devfreq_event_dev *edev)
+{
+       return false;
+}
+
+static inline int devfreq_event_set_event(struct devfreq_event_dev *edev)
+{
+       return -EINVAL;
+}
+
+static inline int devfreq_event_get_event(struct devfreq_event_dev *edev,
+                                       struct devfreq_event_data *edata)
+{
+       return -EINVAL;
+}
+
+static inline int devfreq_event_reset_event(struct devfreq_event_dev *edev)
+{
+       return -EINVAL;
+}
+
+static inline void *devfreq_event_get_drvdata(struct devfreq_event_dev *edev)
+{
+       return ERR_PTR(-EINVAL);
+}
+
+static inline struct devfreq_event_dev *devfreq_event_get_edev_by_phandle(
+                                       struct device *dev, int index)
+{
+       return ERR_PTR(-EINVAL);
+}
+
+static inline int devfreq_event_get_edev_count(struct device *dev)
+{
+       return -EINVAL;
+}
+
+static inline struct devfreq_event_dev *devfreq_event_add_edev(struct device *dev,
+                                       struct devfreq_event_desc *desc)
+{
+       return ERR_PTR(-EINVAL);
+}
+
+static inline int devfreq_event_remove_edev(struct devfreq_event_dev *edev)
+{
+       return -EINVAL;
+}
+
+static inline struct devfreq_event_dev *devm_devfreq_event_add_edev(
+                                       struct device *dev,
+                                       struct devfreq_event_desc *desc)
+{
+       return ERR_PTR(-EINVAL);
+}
+
+static inline void devm_devfreq_event_remove_edev(struct device *dev,
+                                       struct devfreq_event_dev *edev)
+{
+}
+
+static inline void *devfreq_event_get_drvdata(struct devfreq_event_dev *edev)
+{
+       return NULL;
+}
+#endif /* CONFIG_PM_DEVFREQ_EVENT */
+
+#endif /* __LINUX_DEVFREQ_EVENT_H__ */
index 421eb6a9e6007c323355f40d40e3011cf5cb6f98..211e9da8a7d79df19261f2887f6d972ef11733b6 100644 (file)
@@ -29,6 +29,7 @@
 #include <linux/atomic.h>
 #include <linux/device.h>
 #include <linux/io.h>
+#include <linux/resource_ext.h>
 #include <uapi/linux/pci.h>
 
 #include <linux/pci_ids.h>
@@ -399,16 +400,10 @@ static inline int pci_channel_offline(struct pci_dev *pdev)
        return (pdev->error_state != pci_channel_io_normal);
 }
 
-struct pci_host_bridge_window {
-       struct list_head list;
-       struct resource *res;           /* host bridge aperture (CPU address) */
-       resource_size_t offset;         /* bus address + offset = CPU address */
-};
-
 struct pci_host_bridge {
        struct device dev;
        struct pci_bus *bus;            /* root bus */
-       struct list_head windows;       /* pci_host_bridge_windows */
+       struct list_head windows;       /* resource_entry */
        void (*release_fn)(struct pci_host_bridge *);
        void *release_data;
 };
index 8b5976364619a5ad113a4a4c8bc51155a526ae38..e2f1be6dd9dd386fb1c3e6e8c62580454ff85670 100644 (file)
@@ -597,7 +597,7 @@ struct dev_pm_info {
 
 extern void update_pm_runtime_accounting(struct device *dev);
 extern int dev_pm_get_subsys_data(struct device *dev);
-extern int dev_pm_put_subsys_data(struct device *dev);
+extern void dev_pm_put_subsys_data(struct device *dev);
 
 /*
  * Power domains provide callbacks that are executed during system suspend,
index a9edab2c787a53e809150034128bc46448a010ad..080e778118ba85dfab3a1bd99dd0967917837312 100644 (file)
@@ -113,8 +113,6 @@ struct generic_pm_domain_data {
        struct pm_domain_data base;
        struct gpd_timing_data td;
        struct notifier_block nb;
-       struct mutex lock;
-       unsigned int refcount;
        int need_restore;
 };
 
@@ -140,7 +138,6 @@ extern int __pm_genpd_name_add_device(const char *domain_name,
 
 extern int pm_genpd_remove_device(struct generic_pm_domain *genpd,
                                  struct device *dev);
-extern void pm_genpd_dev_need_restore(struct device *dev, bool val);
 extern int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
                                  struct generic_pm_domain *new_subdomain);
 extern int pm_genpd_add_subdomain_names(const char *master_name,
@@ -187,7 +184,6 @@ static inline int pm_genpd_remove_device(struct generic_pm_domain *genpd,
 {
        return -ENOSYS;
 }
-static inline void pm_genpd_dev_need_restore(struct device *dev, bool val) {}
 static inline int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
                                         struct generic_pm_domain *new_sd)
 {
diff --git a/include/linux/resource_ext.h b/include/linux/resource_ext.h
new file mode 100644 (file)
index 0000000..e2bf63d
--- /dev/null
@@ -0,0 +1,77 @@
+/*
+ * Copyright (C) 2015, Intel Corporation
+ * Author: Jiang Liu <jiang.liu@linux.intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+#ifndef _LINUX_RESOURCE_EXT_H
+#define _LINUX_RESOURCE_EXT_H
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/ioport.h>
+#include <linux/slab.h>
+
+/* Represent resource window for bridge devices */
+struct resource_win {
+       struct resource res;            /* In master (CPU) address space */
+       resource_size_t offset;         /* Translation offset for bridge */
+};
+
+/*
+ * Common resource list management data structure and interfaces to support
+ * ACPI, PNP and PCI host bridge etc.
+ */
+struct resource_entry {
+       struct list_head        node;
+       struct resource         *res;   /* In master (CPU) address space */
+       resource_size_t         offset; /* Translation offset for bridge */
+       struct resource         __res;  /* Default storage for res */
+};
+
+extern struct resource_entry *
+resource_list_create_entry(struct resource *res, size_t extra_size);
+extern void resource_list_free(struct list_head *head);
+
+static inline void resource_list_add(struct resource_entry *entry,
+                                    struct list_head *head)
+{
+       list_add(&entry->node, head);
+}
+
+static inline void resource_list_add_tail(struct resource_entry *entry,
+                                         struct list_head *head)
+{
+       list_add_tail(&entry->node, head);
+}
+
+static inline void resource_list_del(struct resource_entry *entry)
+{
+       list_del(&entry->node);
+}
+
+static inline void resource_list_free_entry(struct resource_entry *entry)
+{
+       kfree(entry);
+}
+
+static inline void
+resource_list_destroy_entry(struct resource_entry *entry)
+{
+       resource_list_del(entry);
+       resource_list_free_entry(entry);
+}
+
+#define resource_list_for_each_entry(entry, list)      \
+       list_for_each_entry((entry), (list), node)
+
+#define resource_list_for_each_entry_safe(entry, tmp, list)    \
+       list_for_each_entry_safe((entry), (tmp), (list), node)
+
+#endif /* _LINUX_RESOURCE_EXT_H */
index 5f4c006c4b1ea737497e4647be5e3c355cf2bd3a..97b0df71303ef7bbecf28d1479c3e198885b6199 100644 (file)
@@ -41,6 +41,8 @@
 #include <linux/platform_device.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
+#include <linux/debugfs.h>
+#include <linux/seq_file.h>
 
 #include <linux/uaccess.h>
 #include <linux/export.h>
@@ -182,6 +184,81 @@ static inline void pm_qos_set_value(struct pm_qos_constraints *c, s32 value)
        c->target_value = value;
 }
 
+static inline int pm_qos_get_value(struct pm_qos_constraints *c);
+static int pm_qos_dbg_show_requests(struct seq_file *s, void *unused)
+{
+       struct pm_qos_object *qos = (struct pm_qos_object *)s->private;
+       struct pm_qos_constraints *c;
+       struct pm_qos_request *req;
+       char *type;
+       unsigned long flags;
+       int tot_reqs = 0;
+       int active_reqs = 0;
+
+       if (IS_ERR_OR_NULL(qos)) {
+               pr_err("%s: bad qos param!\n", __func__);
+               return -EINVAL;
+       }
+       c = qos->constraints;
+       if (IS_ERR_OR_NULL(c)) {
+               pr_err("%s: Bad constraints on qos?\n", __func__);
+               return -EINVAL;
+       }
+
+       /* Lock to ensure we have a snapshot */
+       spin_lock_irqsave(&pm_qos_lock, flags);
+       if (plist_head_empty(&c->list)) {
+               seq_puts(s, "Empty!\n");
+               goto out;
+       }
+
+       switch (c->type) {
+       case PM_QOS_MIN:
+               type = "Minimum";
+               break;
+       case PM_QOS_MAX:
+               type = "Maximum";
+               break;
+       case PM_QOS_SUM:
+               type = "Sum";
+               break;
+       default:
+               type = "Unknown";
+       }
+
+       plist_for_each_entry(req, &c->list, node) {
+               char *state = "Default";
+
+               if ((req->node).prio != c->default_value) {
+                       active_reqs++;
+                       state = "Active";
+               }
+               tot_reqs++;
+               seq_printf(s, "%d: %d: %s\n", tot_reqs,
+                          (req->node).prio, state);
+       }
+
+       seq_printf(s, "Type=%s, Value=%d, Requests: active=%d / total=%d\n",
+                  type, pm_qos_get_value(c), active_reqs, tot_reqs);
+
+out:
+       spin_unlock_irqrestore(&pm_qos_lock, flags);
+       return 0;
+}
+
+static int pm_qos_dbg_open(struct inode *inode, struct file *file)
+{
+       return single_open(file, pm_qos_dbg_show_requests,
+                          inode->i_private);
+}
+
+static const struct file_operations pm_qos_debug_fops = {
+       .open           = pm_qos_dbg_open,
+       .read           = seq_read,
+       .llseek         = seq_lseek,
+       .release        = single_release,
+};
+
 /**
  * pm_qos_update_target - manages the constraints list and calls the notifiers
  *  if needed
@@ -509,12 +586,17 @@ int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier)
 EXPORT_SYMBOL_GPL(pm_qos_remove_notifier);
 
 /* User space interface to PM QoS classes via misc devices */
-static int register_pm_qos_misc(struct pm_qos_object *qos)
+static int register_pm_qos_misc(struct pm_qos_object *qos, struct dentry *d)
 {
        qos->pm_qos_power_miscdev.minor = MISC_DYNAMIC_MINOR;
        qos->pm_qos_power_miscdev.name = qos->name;
        qos->pm_qos_power_miscdev.fops = &pm_qos_power_fops;
 
+       if (d) {
+               (void)debugfs_create_file(qos->name, S_IRUGO, d,
+                                         (void *)qos, &pm_qos_debug_fops);
+       }
+
        return misc_register(&qos->pm_qos_power_miscdev);
 }
 
@@ -608,11 +690,16 @@ static int __init pm_qos_power_init(void)
 {
        int ret = 0;
        int i;
+       struct dentry *d;
 
        BUILD_BUG_ON(ARRAY_SIZE(pm_qos_array) != PM_QOS_NUM_CLASSES);
 
+       d = debugfs_create_dir("pm_qos", NULL);
+       if (IS_ERR_OR_NULL(d))
+               d = NULL;
+
        for (i = PM_QOS_CPU_DMA_LATENCY; i < PM_QOS_NUM_CLASSES; i++) {
-               ret = register_pm_qos_misc(pm_qos_array[i]);
+               ret = register_pm_qos_misc(pm_qos_array[i], d);
                if (ret < 0) {
                        printk(KERN_ERR "pm_qos_param: %s setup failed\n",
                               pm_qos_array[i]->name);
index 0c40c16174b4d8eb638a06efe159446ded6f2e71..c24d5a23bf939be71f75736efdbd3874da157c2d 100644 (file)
@@ -1472,9 +1472,9 @@ static inline unsigned long preallocate_highmem_fraction(unsigned long nr_pages,
 /**
  * free_unnecessary_pages - Release preallocated pages not needed for the image
  */
-static void free_unnecessary_pages(void)
+static unsigned long free_unnecessary_pages(void)
 {
-       unsigned long save, to_free_normal, to_free_highmem;
+       unsigned long save, to_free_normal, to_free_highmem, free;
 
        save = count_data_pages();
        if (alloc_normal >= save) {
@@ -1495,6 +1495,7 @@ static void free_unnecessary_pages(void)
                else
                        to_free_normal = 0;
        }
+       free = to_free_normal + to_free_highmem;
 
        memory_bm_position_reset(&copy_bm);
 
@@ -1518,6 +1519,8 @@ static void free_unnecessary_pages(void)
                swsusp_unset_page_free(page);
                __free_page(page);
        }
+
+       return free;
 }
 
 /**
@@ -1707,7 +1710,7 @@ int hibernate_preallocate_memory(void)
         * pages in memory, but we have allocated more.  Release the excessive
         * ones now.
         */
-       free_unnecessary_pages();
+       pages -= free_unnecessary_pages();
 
  out:
        stop = ktime_get();
@@ -2310,8 +2313,6 @@ static inline void free_highmem_data(void)
                free_image_page(buffer, PG_UNSAFE_CLEAR);
 }
 #else
-static inline int get_safe_write_buffer(void) { return 0; }
-
 static unsigned int
 count_highmem_image_pages(struct memory_bitmap *bm) { return 0; }
 
index 0bcebffc4e77d5f45571e38ddc5dcc38f40594f4..19f2357dfda3fe88f7e0ddc6b29778176138a619 100644 (file)
@@ -22,6 +22,7 @@
 #include <linux/device.h>
 #include <linux/pfn.h>
 #include <linux/mm.h>
+#include <linux/resource_ext.h>
 #include <asm/io.h>
 
 
@@ -1529,6 +1530,30 @@ int iomem_is_exclusive(u64 addr)
        return err;
 }
 
+struct resource_entry *resource_list_create_entry(struct resource *res,
+                                                 size_t extra_size)
+{
+       struct resource_entry *entry;
+
+       entry = kzalloc(sizeof(*entry) + extra_size, GFP_KERNEL);
+       if (entry) {
+               INIT_LIST_HEAD(&entry->node);
+               entry->res = res ? res : &entry->__res;
+       }
+
+       return entry;
+}
+EXPORT_SYMBOL(resource_list_create_entry);
+
+void resource_list_free(struct list_head *head)
+{
+       struct resource_entry *entry, *tmp;
+
+       list_for_each_entry_safe(entry, tmp, head, node)
+               resource_list_destroy_entry(entry);
+}
+EXPORT_SYMBOL(resource_list_free);
+
 static int __init strict_iomem(char *str)
 {
        if (strstr(str, "relaxed"))
index 1c71382b283d5abe23fedb1923952aa02ee6c73a..eb4220a132ecd1be9002cf35edf82870969d6c82 100644 (file)
@@ -13,5 +13,6 @@
 #define CREATE_TRACE_POINTS
 #include <trace/events/power.h>
 
+EXPORT_TRACEPOINT_SYMBOL_GPL(suspend_resume);
 EXPORT_TRACEPOINT_SYMBOL_GPL(cpu_idle);
 
index f4b953354ff7e28fa11e86b9b260754ad78c7e99..eec68804150076e122a0620b325647b4561ba8f4 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 2f0f34a36db4440660c51366fa4f2bcf2aa6343d..5da129e10aa2dcd049647d10d025c26e08b7cf65 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index c13ff9c51d74d8db2cfafc047f0329619664e911..b51e40a9a12077d911da20246dea9ac65602d7f0 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 0dc2485dedf5e5f68c2c409ef7ee5fa6facc8d48..92f1fd700344521a40020ce5ea1ee2908bdf0fe6 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 733f9e490fc4345913f5d2beb38f2c1e0c5a2179..e153fcb12b1a072df72b8101dd2f736162b5e12e 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 99b47b6194a3e7637a456be06a3d263cb0f78e82..3853a7350440094794a1fc9b93e3ec61afc30e84 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 7ccb073f8316d48b6136c60634465a0d9c67dc78..6858c0893c91fbeaa2dc42d01bafa1597fc344e5 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index a2d37d610639afaf4fc1f23e6742ae74001f6bdd..84bdef0136cbc15a6b9aea1c7c435f57452445fa 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 24d32968802d5fa4ba7af2583629b720b3a5bc38..c736adf5fb55241ecb564a796496b2fad32e3b67 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index d470046a6d81ae9e89b0da80173cb4a277bcfce2..8f2fe168228eb12d9e97efa085d9d5d5d41d94f3 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 853b4da22c3e12ef509e26db30651cb6e12de5cd..d0ba6535f5af0d0641487cda70ebb5b89ce24784 100644 (file)
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
index 2e2ba2efa0d9f97629ec5af9fb1136b3dee8962d..3ed7c0476d486d21dcb1c735478191ffa88c2fc7 100644 (file)
@@ -209,7 +209,7 @@ $(OUTPUT)%.o: %.c
 
 $(OUTPUT)cpupower: $(UTIL_OBJS) $(OUTPUT)libcpupower.so.$(LIB_MAJ)
        $(ECHO) "  CC      " $@
-       $(QUIET) $(CC) $(CFLAGS) $(LDFLAGS) $(UTIL_OBJS) -lcpupower -lrt -lpci -L$(OUTPUT) -o $@
+       $(QUIET) $(CC) $(CFLAGS) $(LDFLAGS) $(UTIL_OBJS) -lcpupower -Wl,-rpath=./ -lrt -lpci -L$(OUTPUT) -o $@
        $(QUIET) $(STRIPCMD) $@
 
 $(OUTPUT)po/$(PACKAGE).pot: $(UTIL_SRC)
index 56bfb523c5bb0d277031e8a4b80dfdeb141a1099..9b950699e63d9d5fcf057d9f499071afa3d4b9ba 100644 (file)
@@ -12,16 +12,16 @@ turbostat \- Report processor frequency and idle statistics
 .RB [ "\-i interval_sec" ]
 .SH DESCRIPTION
 \fBturbostat \fP reports processor topology, frequency,
-idle power-state statistics, temperature and power on modern X86 processors.
-Either \fBcommand\fP is forked and statistics are printed
-upon its completion, or statistics are printed periodically.
-
-\fBturbostat \fP
-must be run on root, and
-minimally requires that the processor
-supports an "invariant" TSC, plus the APERF and MPERF MSRs.
-Additional information is reported depending on hardware counter support.
-
+idle power-state statistics, temperature and power on X86 processors.
+There are two ways to invoke turbostat.
+The first method is to supply a
+\fBcommand\fP, which is forked and statistics are printed
+upon its completion.
+The second method is to omit the command,
+and turbostat will print statistics every 5 seconds.
+The 5-second interval can be changed using the \fB-i\fP option.
+
+Some information is not available on older processors.
 .SS Options
 The \fB-p\fP option limits output to the 1st thread in 1st core of each package.
 .PP
@@ -130,12 +130,13 @@ cpu3: MSR_IA32_THERM_STATUS: 0x884e0000 (27 C +/- 1)
  ...
 .fi
 The \fBmax efficiency\fP frequency, a.k.a. Low Frequency Mode, is the frequency
-available at the minimum package voltage.  The \fBTSC frequency\fP is the nominal
-maximum frequency of the processor if turbo-mode were not available.  This frequency
+available at the minimum package voltage.  The \fBTSC frequency\fP is the base
+frequency of the processor -- this should match the brand string
+in /proc/cpuinfo.  This base frequency
 should be sustainable on all CPUs indefinitely, given nominal power and cooling.
 The remaining rows show what maximum turbo frequency is possible
-depending on the number of idle cores.  Note that this information is
-not available on all processors.
+depending on the number of idle cores.  Note that not all information is
+available on all processors.
 .SH FORK EXAMPLE
 If turbostat is invoked with a command, it will fork that command
 and output the statistics gathered when the command exits.
@@ -176,6 +177,11 @@ not including any non-busy idle time.
 
 .B "turbostat "
 must be run as root.
+Alternatively, non-root users can be granted permission to run turbostat as follows:
+
+# setcap cap_sys_rawio=ep ./turbostat
+
+# chmod +r /dev/cpu/*/msr
 
 .B "turbostat "
 reads hardware counters, but doesn't write them.
@@ -184,15 +190,33 @@ multiple invocations of itself.
 
 \fBturbostat \fP
 may work poorly on Linux-2.6.20 through 2.6.29,
-as \fBacpi-cpufreq \fPperiodically cleared the APERF and MPERF
+as \fBacpi-cpufreq \fPperiodically cleared the APERF and MPERF MSRs
 in those kernels.
 
-If the TSC column does not make sense, then
-the other numbers will also make no sense.
-Turbostat is lightweight, and its data collection is not atomic.
-These issues are usually caused by an extremely short measurement
-interval (much less than 1 second), or system activity that prevents
-turbostat from being able to run on all CPUS to quickly collect data.
+AVG_MHz = APERF_delta/measurement_interval.  This is the actual
+number of elapsed cycles divided by the entire sample interval --
+including idle time.  Note that this calculation is resilient
+to systems lacking a non-stop TSC.
+
+TSC_MHz = TSC_delta/measurement_interval.
+On a system with an invariant TSC, this value will be constant
+and will closely match the base frequency value shown
+in the brand string in /proc/cpuinfo.  On a system where
+the TSC stops in idle, TSC_MHz will drop
+below the processor's base frequency.
+
+%Busy = MPERF_delta/TSC_delta
+
+Bzy_MHz = TSC_delta*APERF_delta/MPERF_delta/measurement_interval
+
+Note that these calculations depend on TSC_delta, so they
+are not reliable during intervals when the TSC is not running at the base frequency.
+
+Turbostat data collection is not atomic.
+Extremely short measurement intervals (much less than 1 second),
+or system activity that prevents turbostat from being able
+to run on all CPUS to quickly collect data, will result in
+inconsistent results.
 
 The APERF, MPERF MSRs are defined to count non-halted cycles.
 Although it is not guaranteed by the architecture, turbostat assumes
index 5b1b807265a15e826e174a197c06b6c0788ff9b7..a02c02f25e88a273ac92ed7b1b1a2d1bf801a17c 100644 (file)
@@ -38,6 +38,8 @@
 #include <ctype.h>
 #include <sched.h>
 #include <cpuid.h>
+#include <linux/capability.h>
+#include <errno.h>
 
 char *proc_stat = "/proc/stat";
 unsigned int interval_sec = 5; /* set with -i interval_sec */
@@ -59,8 +61,8 @@ unsigned int has_epb;
 unsigned int units = 1000000;  /* MHz etc */
 unsigned int genuine_intel;
 unsigned int has_invariant_tsc;
-unsigned int do_nehalem_platform_info;
-unsigned int do_nehalem_turbo_ratio_limit;
+unsigned int do_nhm_platform_info;
+unsigned int do_nhm_turbo_ratio_limit;
 unsigned int do_ivt_turbo_ratio_limit;
 unsigned int extra_msr_offset32;
 unsigned int extra_msr_offset64;
@@ -81,6 +83,9 @@ unsigned int tcc_activation_temp;
 unsigned int tcc_activation_temp_override;
 double rapl_power_units, rapl_energy_units, rapl_time_units;
 double rapl_joule_counter_range;
+unsigned int do_core_perf_limit_reasons;
+unsigned int do_gfx_perf_limit_reasons;
+unsigned int do_ring_perf_limit_reasons;
 
 #define RAPL_PKG               (1 << 0)
                                        /* 0x610 MSR_PKG_POWER_LIMIT */
@@ -251,15 +256,13 @@ int get_msr(int cpu, off_t offset, unsigned long long *msr)
        sprintf(pathname, "/dev/cpu/%d/msr", cpu);
        fd = open(pathname, O_RDONLY);
        if (fd < 0)
-               return -1;
+               err(-1, "%s open failed, try chown or chmod +r /dev/cpu/*/msr, or run as root", pathname);
 
        retval = pread(fd, msr, sizeof *msr, offset);
        close(fd);
 
-       if (retval != sizeof *msr) {
-               fprintf(stderr, "%s offset 0x%llx read failed\n", pathname, (unsigned long long)offset);
-               return -1;
-       }
+       if (retval != sizeof *msr)
+               err(-1, "%s offset 0x%llx read failed", pathname, (unsigned long long)offset);
 
        return 0;
 }
@@ -281,7 +284,7 @@ void print_header(void)
                outp += sprintf(outp, "     CPU");
        if (has_aperf)
                outp += sprintf(outp, " Avg_MHz");
-       if (do_nhm_cstates)
+       if (has_aperf)
                outp += sprintf(outp, "   %%Busy");
        if (has_aperf)
                outp += sprintf(outp, " Bzy_MHz");
@@ -337,7 +340,7 @@ void print_header(void)
                        outp += sprintf(outp, "   PKG_%%");
                if (do_rapl & RAPL_DRAM_PERF_STATUS)
                        outp += sprintf(outp, "   RAM_%%");
-       } else {
+       } else if (do_rapl && rapl_joules) {
                if (do_rapl & RAPL_PKG)
                        outp += sprintf(outp, "   Pkg_J");
                if (do_rapl & RAPL_CORES)
@@ -457,25 +460,25 @@ int format_counters(struct thread_data *t, struct core_data *c,
                        outp += sprintf(outp, "%8d", t->cpu_id);
        }
 
-       /* AvgMHz */
+       /* Avg_MHz */
        if (has_aperf)
                outp += sprintf(outp, "%8.0f",
                        1.0 / units * t->aperf / interval_float);
 
-       /* %c0 */
-       if (do_nhm_cstates) {
+       /* %Busy */
+       if (has_aperf) {
                if (!skip_c0)
                        outp += sprintf(outp, "%8.2f", 100.0 * t->mperf/t->tsc);
                else
                        outp += sprintf(outp, "********");
        }
 
-       /* BzyMHz */
+       /* Bzy_MHz */
        if (has_aperf)
                outp += sprintf(outp, "%8.0f",
                        1.0 * t->tsc / units * t->aperf / t->mperf / interval_float);
 
-       /* TSC */
+       /* TSC_MHz */
        outp += sprintf(outp, "%8.0f", 1.0 * t->tsc/units/interval_float);
 
        /* SMI */
@@ -561,7 +564,7 @@ int format_counters(struct thread_data *t, struct core_data *c,
                        outp += sprintf(outp, fmt8, 100.0 * p->rapl_pkg_perf_status * rapl_time_units / interval_float);
                if (do_rapl & RAPL_DRAM_PERF_STATUS)
                        outp += sprintf(outp, fmt8, 100.0 * p->rapl_dram_perf_status * rapl_time_units / interval_float);
-       } else {
+       } else if (do_rapl && rapl_joules) {
                if (do_rapl & RAPL_PKG)
                        outp += sprintf(outp, fmt8,
                                        p->energy_pkg * rapl_energy_units);
@@ -578,8 +581,8 @@ int format_counters(struct thread_data *t, struct core_data *c,
                        outp += sprintf(outp, fmt8, 100.0 * p->rapl_pkg_perf_status * rapl_time_units / interval_float);
                if (do_rapl & RAPL_DRAM_PERF_STATUS)
                        outp += sprintf(outp, fmt8, 100.0 * p->rapl_dram_perf_status * rapl_time_units / interval_float);
-       outp += sprintf(outp, fmt8, interval_float);
 
+               outp += sprintf(outp, fmt8, interval_float);
        }
 done:
        outp += sprintf(outp, "\n");
@@ -670,24 +673,26 @@ delta_thread(struct thread_data *new, struct thread_data *old,
 
        old->c1 = new->c1 - old->c1;
 
-       if ((new->aperf > old->aperf) && (new->mperf > old->mperf)) {
-               old->aperf = new->aperf - old->aperf;
-               old->mperf = new->mperf - old->mperf;
-       } else {
+       if (has_aperf) {
+               if ((new->aperf > old->aperf) && (new->mperf > old->mperf)) {
+                       old->aperf = new->aperf - old->aperf;
+                       old->mperf = new->mperf - old->mperf;
+               } else {
 
-               if (!aperf_mperf_unstable) {
-                       fprintf(stderr, "%s: APERF or MPERF went backwards *\n", progname);
-                       fprintf(stderr, "* Frequency results do not cover entire interval *\n");
-                       fprintf(stderr, "* fix this by running Linux-2.6.30 or later *\n");
+                       if (!aperf_mperf_unstable) {
+                               fprintf(stderr, "%s: APERF or MPERF went backwards *\n", progname);
+                               fprintf(stderr, "* Frequency results do not cover entire interval *\n");
+                               fprintf(stderr, "* fix this by running Linux-2.6.30 or later *\n");
 
-                       aperf_mperf_unstable = 1;
+                               aperf_mperf_unstable = 1;
+                       }
+                       /*
+                        * mperf delta is likely a huge "positive" number
+                        * can not use it for calculating c0 time
+                        */
+                       skip_c0 = 1;
+                       skip_c1 = 1;
                }
-               /*
-                * mperf delta is likely a huge "positive" number
-                * can not use it for calculating c0 time
-                */
-               skip_c0 = 1;
-               skip_c1 = 1;
        }
 
 
@@ -1019,7 +1024,7 @@ void print_verbose_header(void)
        unsigned long long msr;
        unsigned int ratio;
 
-       if (!do_nehalem_platform_info)
+       if (!do_nhm_platform_info)
                return;
 
        get_msr(0, MSR_NHM_PLATFORM_INFO, &msr);
@@ -1132,7 +1137,7 @@ print_nhm_turbo_ratio_limits:
        }
        fprintf(stderr, ")\n");
 
-       if (!do_nehalem_turbo_ratio_limit)
+       if (!do_nhm_turbo_ratio_limit)
                return;
 
        get_msr(0, MSR_NHM_TURBO_RATIO_LIMIT, &msr);
@@ -1178,6 +1183,7 @@ print_nhm_turbo_ratio_limits:
        if (ratio)
                fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 1 active cores\n",
                        ratio, bclk, ratio * bclk);
+
 }
 
 void free_all_buffers(void)
@@ -1458,17 +1464,60 @@ void check_dev_msr()
        struct stat sb;
 
        if (stat("/dev/cpu/0/msr", &sb))
-               err(-5, "no /dev/cpu/0/msr\n"
-                   "Try \"# modprobe msr\"");
+               err(-5, "no /dev/cpu/0/msr, try \"# modprobe msr\"");
 }
 
-void check_super_user()
+void check_permissions()
 {
-       if (getuid() != 0)
-               errx(-6, "must be root");
+       struct __user_cap_header_struct cap_header_data;
+       cap_user_header_t cap_header = &cap_header_data;
+       struct __user_cap_data_struct cap_data_data;
+       cap_user_data_t cap_data = &cap_data_data;
+       extern int capget(cap_user_header_t hdrp, cap_user_data_t datap);
+       int do_exit = 0;
+
+       /* check for CAP_SYS_RAWIO */
+       cap_header->pid = getpid();
+       cap_header->version = _LINUX_CAPABILITY_VERSION;
+       if (capget(cap_header, cap_data) < 0)
+               err(-6, "capget(2) failed");
+
+       if ((cap_data->effective & (1 << CAP_SYS_RAWIO)) == 0) {
+               do_exit++;
+               warnx("no CAP_SYS_RAWIO capability,"
+                       " try \"# setcap cap_sys_rawio=ep %s\"", progname);
+       }
+
+       /* test file permissions */
+       if (euidaccess("/dev/cpu/0/msr", R_OK)) {
+               do_exit++;
+               warn("/dev/cpu/0/msr open failed, try chown or chmod +r /dev/cpu/*/msr");
+       }
+
+       /* if all else fails, tell them to be root */
+       if (do_exit)
+               if (getuid() != 0)
+                       warnx("... or simply run as root");
+
+       if (do_exit)
+               exit(-6);
 }
 
-int has_nehalem_turbo_ratio_limit(unsigned int family, unsigned int model)
+/*
+ * NHM adds support for additional MSRs:
+ *
+ * MSR_SMI_COUNT                   0x00000034
+ *
+ * MSR_NHM_PLATFORM_INFO           0x000000ce
+ * MSR_NHM_SNB_PKG_CST_CFG_CTL     0x000000e2
+ *
+ * MSR_PKG_C3_RESIDENCY            0x000003f8
+ * MSR_PKG_C6_RESIDENCY            0x000003f9
+ * MSR_CORE_C3_RESIDENCY           0x000003fc
+ * MSR_CORE_C6_RESIDENCY           0x000003fd
+ *
+ */
+int has_nhm_msrs(unsigned int family, unsigned int model)
 {
        if (!genuine_intel)
                return 0;
@@ -1495,13 +1544,27 @@ int has_nehalem_turbo_ratio_limit(unsigned int family, unsigned int model)
        case 0x3D:      /* BDW */
        case 0x4F:      /* BDX */
        case 0x56:      /* BDX-DE */
-               return 1;
        case 0x2E:      /* Nehalem-EX Xeon - Beckton */
        case 0x2F:      /* Westmere-EX Xeon - Eagleton */
+               return 1;
        default:
                return 0;
        }
 }
+int has_nhm_turbo_ratio_limit(unsigned int family, unsigned int model)
+{
+       if (!has_nhm_msrs(family, model))
+               return 0;
+
+       switch (model) {
+       /* Nehalem compatible, but do not include turbo-ratio limit support */
+       case 0x2E:      /* Nehalem-EX Xeon - Beckton */
+       case 0x2F:      /* Westmere-EX Xeon - Eagleton */
+               return 0;
+       default:
+               return 1;
+       }
+}
 int has_ivt_turbo_ratio_limit(unsigned int family, unsigned int model)
 {
        if (!genuine_intel)
@@ -1564,6 +1627,103 @@ int print_epb(struct thread_data *t, struct core_data *c, struct pkg_data *p)
        return 0;
 }
 
+/*
+ * print_perf_limit()
+ */
+int print_perf_limit(struct thread_data *t, struct core_data *c, struct pkg_data *p)
+{
+       unsigned long long msr;
+       int cpu;
+
+       cpu = t->cpu_id;
+
+       /* per-package */
+       if (!(t->flags & CPU_IS_FIRST_THREAD_IN_CORE) || !(t->flags & CPU_IS_FIRST_CORE_IN_PACKAGE))
+               return 0;
+
+       if (cpu_migrate(cpu)) {
+               fprintf(stderr, "Could not migrate to CPU %d\n", cpu);
+               return -1;
+       }
+
+       if (do_core_perf_limit_reasons) {
+               get_msr(cpu, MSR_CORE_PERF_LIMIT_REASONS, &msr);
+               fprintf(stderr, "cpu%d: MSR_CORE_PERF_LIMIT_REASONS, 0x%08llx", cpu, msr);
+               fprintf(stderr, " (Active: %s%s%s%s%s%s%s%s%s%s%s%s%s%s)",
+                       (msr & 1 << 0) ? "PROCHOT, " : "",
+                       (msr & 1 << 1) ? "ThermStatus, " : "",
+                       (msr & 1 << 2) ? "bit2, " : "",
+                       (msr & 1 << 4) ? "Graphics, " : "",
+                       (msr & 1 << 5) ? "Auto-HWP, " : "",
+                       (msr & 1 << 6) ? "VR-Therm, " : "",
+                       (msr & 1 << 8) ? "Amps, " : "",
+                       (msr & 1 << 9) ? "CorePwr, " : "",
+                       (msr & 1 << 10) ? "PkgPwrL1, " : "",
+                       (msr & 1 << 11) ? "PkgPwrL2, " : "",
+                       (msr & 1 << 12) ? "MultiCoreTurbo, " : "",
+                       (msr & 1 << 13) ? "Transitions, " : "",
+                       (msr & 1 << 14) ? "bit14, " : "",
+                       (msr & 1 << 15) ? "bit15, " : "");
+               fprintf(stderr, " (Logged: %s%s%s%s%s%s%s%s%s%s%s%s%s%s)\n",
+                       (msr & 1 << 16) ? "PROCHOT, " : "",
+                       (msr & 1 << 17) ? "ThermStatus, " : "",
+                       (msr & 1 << 18) ? "bit18, " : "",
+                       (msr & 1 << 20) ? "Graphics, " : "",
+                       (msr & 1 << 21) ? "Auto-HWP, " : "",
+                       (msr & 1 << 22) ? "VR-Therm, " : "",
+                       (msr & 1 << 24) ? "Amps, " : "",
+                       (msr & 1 << 25) ? "CorePwr, " : "",
+                       (msr & 1 << 26) ? "PkgPwrL1, " : "",
+                       (msr & 1 << 27) ? "PkgPwrL2, " : "",
+                       (msr & 1 << 28) ? "MultiCoreTurbo, " : "",
+                       (msr & 1 << 29) ? "Transitions, " : "",
+                       (msr & 1 << 30) ? "bit30, " : "",
+                       (msr & 1 << 31) ? "bit31, " : "");
+
+       }
+       if (do_gfx_perf_limit_reasons) {
+               get_msr(cpu, MSR_GFX_PERF_LIMIT_REASONS, &msr);
+               fprintf(stderr, "cpu%d: MSR_GFX_PERF_LIMIT_REASONS, 0x%08llx", cpu, msr);
+               fprintf(stderr, " (Active: %s%s%s%s%s%s%s%s)",
+                       (msr & 1 << 0) ? "PROCHOT, " : "",
+                       (msr & 1 << 1) ? "ThermStatus, " : "",
+                       (msr & 1 << 4) ? "Graphics, " : "",
+                       (msr & 1 << 6) ? "VR-Therm, " : "",
+                       (msr & 1 << 8) ? "Amps, " : "",
+                       (msr & 1 << 9) ? "GFXPwr, " : "",
+                       (msr & 1 << 10) ? "PkgPwrL1, " : "",
+                       (msr & 1 << 11) ? "PkgPwrL2, " : "");
+               fprintf(stderr, " (Logged: %s%s%s%s%s%s%s%s)\n",
+                       (msr & 1 << 16) ? "PROCHOT, " : "",
+                       (msr & 1 << 17) ? "ThermStatus, " : "",
+                       (msr & 1 << 20) ? "Graphics, " : "",
+                       (msr & 1 << 22) ? "VR-Therm, " : "",
+                       (msr & 1 << 24) ? "Amps, " : "",
+                       (msr & 1 << 25) ? "GFXPwr, " : "",
+                       (msr & 1 << 26) ? "PkgPwrL1, " : "",
+                       (msr & 1 << 27) ? "PkgPwrL2, " : "");
+       }
+       if (do_ring_perf_limit_reasons) {
+               get_msr(cpu, MSR_RING_PERF_LIMIT_REASONS, &msr);
+               fprintf(stderr, "cpu%d: MSR_RING_PERF_LIMIT_REASONS, 0x%08llx", cpu, msr);
+               fprintf(stderr, " (Active: %s%s%s%s%s%s)",
+                       (msr & 1 << 0) ? "PROCHOT, " : "",
+                       (msr & 1 << 1) ? "ThermStatus, " : "",
+                       (msr & 1 << 6) ? "VR-Therm, " : "",
+                       (msr & 1 << 8) ? "Amps, " : "",
+                       (msr & 1 << 10) ? "PkgPwrL1, " : "",
+                       (msr & 1 << 11) ? "PkgPwrL2, " : "");
+               fprintf(stderr, " (Logged: %s%s%s%s%s%s)\n",
+                       (msr & 1 << 16) ? "PROCHOT, " : "",
+                       (msr & 1 << 17) ? "ThermStatus, " : "",
+                       (msr & 1 << 22) ? "VR-Therm, " : "",
+                       (msr & 1 << 24) ? "Amps, " : "",
+                       (msr & 1 << 26) ? "PkgPwrL1, " : "",
+                       (msr & 1 << 27) ? "PkgPwrL2, " : "");
+       }
+       return 0;
+}
+
 #define        RAPL_POWER_GRANULARITY  0x7FFF  /* 15 bit power granularity */
 #define        RAPL_TIME_GRANULARITY   0x3F /* 6 bit time granularity */
 
@@ -1653,6 +1813,27 @@ void rapl_probe(unsigned int family, unsigned int model)
        return;
 }
 
+void perf_limit_reasons_probe(unsigned int family, unsigned int model)
+{
+       if (!genuine_intel)
+               return;
+
+       if (family != 6)
+               return;
+
+       switch (model) {
+       case 0x3C:      /* HSW */
+       case 0x45:      /* HSW */
+       case 0x46:      /* HSW */
+               do_gfx_perf_limit_reasons = 1;
+               /* fall through: HSW also has core and ring limit reasons */
+       case 0x3F:      /* HSX */
+               do_core_perf_limit_reasons = 1;
+               do_ring_perf_limit_reasons = 1;
+       default:
+               return;
+       }
+}
+
 int print_thermal(struct thread_data *t, struct core_data *c, struct pkg_data *p)
 {
        unsigned long long msr;
@@ -1842,8 +2023,15 @@ int print_rapl(struct thread_data *t, struct core_data *c, struct pkg_data *p)
        return 0;
 }
 
+/*
+ * SNB adds support for additional MSRs:
+ *
+ * MSR_PKG_C7_RESIDENCY            0x000003fa
+ * MSR_CORE_C7_RESIDENCY           0x000003fe
+ * MSR_PKG_C2_RESIDENCY            0x0000060d
+ */
 
-int is_snb(unsigned int family, unsigned int model)
+int has_snb_msrs(unsigned int family, unsigned int model)
 {
        if (!genuine_intel)
                return 0;
@@ -1865,7 +2053,14 @@ int is_snb(unsigned int family, unsigned int model)
        return 0;
 }
 
-int has_c8_c9_c10(unsigned int family, unsigned int model)
+/*
+ * HSW adds support for additional MSRs:
+ *
+ * MSR_PKG_C8_RESIDENCY            0x00000630
+ * MSR_PKG_C9_RESIDENCY            0x00000631
+ * MSR_PKG_C10_RESIDENCY           0x00000632
+ */
+int has_hsw_msrs(unsigned int family, unsigned int model)
 {
        if (!genuine_intel)
                return 0;
@@ -1917,7 +2112,7 @@ double slm_bclk(void)
 
 double discover_bclk(unsigned int family, unsigned int model)
 {
-       if (is_snb(family, model))
+       if (has_snb_msrs(family, model))
                return 100.00;
        else if (is_slm(family, model))
                return slm_bclk();
@@ -1965,7 +2160,7 @@ int set_temperature_target(struct thread_data *t, struct core_data *c, struct pk
        }
 
        /* Temperature Target MSR is Nehalem and newer only */
-       if (!do_nehalem_platform_info)
+       if (!do_nhm_platform_info)
                goto guess;
 
        if (get_msr(0, MSR_IA32_TEMPERATURE_TARGET, &msr))
@@ -2029,18 +2224,15 @@ void check_cpuid()
        ebx = ecx = edx = 0;
        __get_cpuid(0x80000000, &max_level, &ebx, &ecx, &edx);
 
-       if (max_level < 0x80000007)
-               errx(1, "CPUID: no invariant TSC (max_level 0x%x)", max_level);
+       if (max_level >= 0x80000007) {
 
-       /*
-        * Non-Stop TSC is advertised by CPUID.EAX=0x80000007: EDX.bit8
-        * this check is valid for both Intel and AMD
-        */
-       __get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx);
-       has_invariant_tsc = edx & (1 << 8);
-
-       if (!has_invariant_tsc)
-               errx(1, "No invariant TSC");
+               /*
+                * Non-Stop TSC is advertised by CPUID.EAX=0x80000007: EDX.bit8;
+                * this check is valid for both Intel and AMD
+                */
+               __get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx);
+               has_invariant_tsc = edx & (1 << 8);
+       }
 
        /*
         * APERF/MPERF is advertised by CPUID.EAX=0x6: ECX.bit0
@@ -2054,26 +2246,22 @@ void check_cpuid()
        has_epb = ecx & (1 << 3);
 
        if (verbose)
-               fprintf(stderr, "CPUID(6): %s%s%s%s\n",
-                       has_aperf ? "APERF" : "No APERF!",
-                       do_dts ? ", DTS" : "",
-                       do_ptm ? ", PTM": "",
-                       has_epb ? ", EPB": "");
-
-       if (!has_aperf)
-               errx(-1, "No APERF");
-
-       do_nehalem_platform_info = genuine_intel && has_invariant_tsc;
-       do_nhm_cstates = genuine_intel; /* all Intel w/ non-stop TSC have NHM counters */
-       do_smi = do_nhm_cstates;
-       do_snb_cstates = is_snb(family, model);
-       do_c8_c9_c10 = has_c8_c9_c10(family, model);
+               fprintf(stderr, "CPUID(6): %sAPERF, %sDTS, %sPTM, %sEPB\n",
+                       has_aperf ? "" : "No ",
+                       do_dts ? "" : "No ",
+                       do_ptm ? "" : "No ",
+                       has_epb ? "" : "No ");
+
+       do_nhm_platform_info = do_nhm_cstates = do_smi = has_nhm_msrs(family, model);
+       do_snb_cstates = has_snb_msrs(family, model);
+       do_c8_c9_c10 = has_hsw_msrs(family, model);
        do_slm_cstates = is_slm(family, model);
        bclk = discover_bclk(family, model);
 
-       do_nehalem_turbo_ratio_limit = has_nehalem_turbo_ratio_limit(family, model);
+       do_nhm_turbo_ratio_limit = has_nhm_turbo_ratio_limit(family, model);
        do_ivt_turbo_ratio_limit = has_ivt_turbo_ratio_limit(family, model);
        rapl_probe(family, model);
+       perf_limit_reasons_probe(family, model);
 
        return;
 }
@@ -2299,10 +2487,9 @@ void setup_all_buffers(void)
 
 void turbostat_init()
 {
-       check_cpuid();
-
        check_dev_msr();
-       check_super_user();
+       check_permissions();
+       check_cpuid();
 
        setup_all_buffers();
 
@@ -2312,6 +2499,9 @@ void turbostat_init()
        if (verbose)
                for_all_cpus(print_epb, ODD_COUNTERS);
 
+       if (verbose)
+               for_all_cpus(print_perf_limit, ODD_COUNTERS);
+
        if (verbose)
                for_all_cpus(print_rapl, ODD_COUNTERS);
 
@@ -2441,7 +2631,7 @@ int main(int argc, char **argv)
        cmdline(argc, argv);
 
        if (verbose)
-               fprintf(stderr, "turbostat v3.7 Feb 6, 2014"
+               fprintf(stderr, "turbostat v3.9 23-Jan, 2015"
                        " - Len Brown <lenb@kernel.org>\n");
 
        turbostat_init();