mirror of
https://github.com/tbsdtv/linux_media.git
synced 2025-07-23 20:51:03 +02:00
Merge tag 'pci-v5.12-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI updates from Bjorn Helgaas:

 "Enumeration:

   - Remove unnecessary locking around _OSC (Bjorn Helgaas)
   - Clarify message about _OSC failure (Bjorn Helgaas)
   - Remove notification of PCIe bandwidth changes (Bjorn Helgaas)
   - Tidy checking of syscall user config accessors (Heiner Kallweit)

  Resource management:

   - Decline to resize resources if boot config must be preserved (Ard Biesheuvel)
   - Fix pci_register_io_range() memory leak (Geert Uytterhoeven)

  Error handling (Keith Busch):

   - Clear error status from the correct device
   - Retain error recovery status so drivers can use it after reset
   - Log the type of Port (Root or Switch Downstream) that we reset
   - Always request a reset for Downstream Ports in frozen state

  Endpoint framework and NTB (Kishon Vijay Abraham I):

   - Make *_get_first_free_bar() take into account 64 bit BAR
   - Add helper API to get the 'next' unreserved BAR
   - Make *_free_bar() return error codes on failure
   - Remove unused pci_epf_match_device()
   - Add support to associate secondary EPC with EPF
   - Add support in configfs to associate two EPCs with EPF
   - Add pci_epc_ops to map MSI IRQ
   - Add pci_epf_ops to expose function-specific attrs
   - Allow user to create sub-directory of 'EPF Device' directory
   - Implement ->msi_map_irq() ops for cadence
   - Configure LM_EP_FUNC_CFG based on epc->function_num_map for cadence
   - Add EP function driver to provide NTB functionality
   - Add support for EPF PCI Non-Transparent Bridge
   - Add specification for PCI NTB function device
   - Add PCI endpoint NTB function user guide
   - Add configfs binding documentation for pci-ntb endpoint function

  Broadcom STB PCIe controller driver:

   - Add support for BCM4908 and external PERST# signal controller (Rafał Miłecki)

  Cadence PCIe controller driver:

   - Retrain Link to work around Gen2 training defect (Nadeem Athani)
   - Fix merge botch in cdns_pcie_host_map_dma_ranges() (Krzysztof Wilczyński)

  Freescale Layerscape PCIe controller driver:

   - Add LX2160A rev2 EP mode support (Hou Zhiqiang)
   - Convert to builtin_platform_driver() (Michael Walle)

  MediaTek PCIe controller driver:

   - Fix OF node reference leak (Krzysztof Wilczyński)

  Microchip PolarFlare PCIe controller driver:

   - Add Microchip PolarFire PCIe controller driver (Daire McNamara)

  Qualcomm PCIe controller driver:

   - Use PHY_REFCLK_USE_PAD only for ipq8064 (Ansuel Smith)
   - Add support for ddrss_sf_tbu clock for sm8250 (Dmitry Baryshkov)

  Renesas R-Car PCIe controller driver:

   - Drop PCIE_RCAR config option (Lad Prabhakar)
   - Always allocate MSI addresses in 32bit space (Marek Vasut)

  Rockchip PCIe controller driver:

   - Add FriendlyARM NanoPi M4B DT binding (Chen-Yu Tsai)
   - Make 'ep-gpios' DT property optional (Chen-Yu Tsai)

  Synopsys DesignWare PCIe controller driver:

   - Work around ECRC configuration hardware defect (Vidya Sagar)
   - Drop support for config space in DT 'ranges' (Rob Herring)
   - Change size to u64 for EP outbound iATU (Shradha Todi)
   - Add upper limit address for outbound iATU (Shradha Todi)
   - Make dw_pcie ops optional (Jisheng Zhang)
   - Remove unnecessary dw_pcie_ops from al driver (Jisheng Zhang)

  Xilinx Versal CPM PCIe controller driver:

   - Fix OF node reference leak (Pan Bian)

  Miscellaneous:

   - Remove tango host controller driver (Arnd Bergmann)
   - Remove IRQ handler & data together (altera-msi, brcmstb, dwc) (Martin Kaiser)
   - Fix xgene-msi race in installing chained IRQ handler (Martin Kaiser)
   - Apply CONFIG_PCI_DEBUG to entire drivers/pci hierarchy (Junhao He)
   - Fix pci-bridge-emul array overruns (Russell King)
   - Remove obsolete uses of WARN_ON(in_interrupt()) (Sebastian Andrzej Siewior)"

* tag 'pci-v5.12-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (69 commits)
  PCI: qcom: Use PHY_REFCLK_USE_PAD only for ipq8064
  PCI: qcom: Add support for ddrss_sf_tbu clock
  dt-bindings: PCI: qcom: Document ddrss_sf_tbu clock for sm8250
  PCI: al: Remove useless dw_pcie_ops
  PCI: dwc: Don't assume the ops in dw_pcie always exist
  PCI: dwc: Add upper limit address for outbound iATU
  PCI: dwc: Change size to u64 for EP outbound iATU
  PCI: dwc: Drop support for config space in 'ranges'
  PCI: layerscape: Convert to builtin_platform_driver()
  PCI: layerscape: Add LX2160A rev2 EP mode support
  dt-bindings: PCI: layerscape: Add LX2160A rev2 compatible strings
  PCI: dwc: Work around ECRC configuration issue
  PCI/portdrv: Report reset for frozen channel
  PCI/AER: Specify the type of Port that was reset
  PCI/ERR: Retain status from error notification
  PCI/AER: Clear AER status from Root Port when resetting Downstream Port
  PCI/ERR: Clear status of the reporting device
  dt-bindings: arm: rockchip: Add FriendlyARM NanoPi M4B
  PCI: rockchip: Make 'ep-gpios' DT property optional
  Documentation: PCI: Add PCI endpoint NTB function user guide
  ...
@@ -36,4 +36,4 @@ obj-$(CONFIG_PCI_ENDPOINT) += endpoint/
 obj-y += controller/
 obj-y += switch/
 
-ccflags-$(CONFIG_PCI_DEBUG) := -DDEBUG
+subdir-ccflags-$(CONFIG_PCI_DEBUG) := -DDEBUG
@@ -55,15 +55,6 @@ config PCI_RCAR_GEN2
	  There are 3 internal PCI controllers available with a single
	  built-in EHCI/OHCI host controller present on each one.
 
-config PCIE_RCAR
-	bool "Renesas R-Car PCIe controller"
-	depends on ARCH_RENESAS || COMPILE_TEST
-	depends on PCI_MSI_IRQ_DOMAIN
-	select PCIE_RCAR_HOST
-	help
-	  Say Y here if you want PCIe controller support on R-Car SoCs.
-	  This option will be removed after arm64 defconfig is updated.
-
 config PCIE_RCAR_HOST
	bool "Renesas R-Car PCIe host controller"
	depends on ARCH_RENESAS || COMPILE_TEST
@@ -242,20 +233,6 @@ config PCIE_MEDIATEK
	  Say Y here if you want to enable PCIe controller support on
	  MediaTek SoCs.
 
-config PCIE_TANGO_SMP8759
-	bool "Tango SMP8759 PCIe controller (DANGEROUS)"
-	depends on ARCH_TANGO && PCI_MSI && OF
-	depends on BROKEN
-	select PCI_HOST_COMMON
-	help
-	  Say Y here to enable PCIe controller support for Sigma Designs
-	  Tango SMP8759-based systems.
-
-	  Note: The SMP8759 controller multiplexes PCI config and MMIO
-	  accesses, and Linux doesn't provide a way to serialize them.
-	  This can lead to data corruption if drivers perform concurrent
-	  config and MMIO accesses.
-
 config VMD
	depends on PCI_MSI && X86_64 && SRCU
	tristate "Intel Volume Management Device Driver"
@@ -273,7 +250,7 @@ config VMD
 
 config PCIE_BRCMSTB
	tristate "Broadcom Brcmstb PCIe host controller"
-	depends on ARCH_BRCMSTB || ARCH_BCM2835 || COMPILE_TEST
+	depends on ARCH_BRCMSTB || ARCH_BCM2835 || ARCH_BCM4908 || COMPILE_TEST
	depends on OF
	depends on PCI_MSI_IRQ_DOMAIN
	default ARCH_BRCMSTB
@@ -298,6 +275,16 @@ config PCI_LOONGSON
	  Say Y here if you want to enable PCI controller support on
	  Loongson systems.
 
+config PCIE_MICROCHIP_HOST
+	bool "Microchip AXI PCIe host bridge support"
+	depends on PCI_MSI && OF
+	select PCI_MSI_IRQ_DOMAIN
+	select GENERIC_MSI_IRQ_DOMAIN
+	select PCI_HOST_COMMON
+	help
+	  Say Y here if you want kernel to support the Microchip AXI PCIe
+	  Host Bridge driver.
+
 config PCIE_HISI_ERR
	depends on ACPI_APEI_GHES && (ARM64 || COMPILE_TEST)
	bool "HiSilicon HIP PCIe controller error handling driver"
@@ -27,7 +27,7 @@ obj-$(CONFIG_PCIE_ROCKCHIP) += pcie-rockchip.o
 obj-$(CONFIG_PCIE_ROCKCHIP_EP) += pcie-rockchip-ep.o
 obj-$(CONFIG_PCIE_ROCKCHIP_HOST) += pcie-rockchip-host.o
 obj-$(CONFIG_PCIE_MEDIATEK) += pcie-mediatek.o
-obj-$(CONFIG_PCIE_TANGO_SMP8759) += pcie-tango.o
+obj-$(CONFIG_PCIE_MICROCHIP_HOST) += pcie-microchip-host.o
 obj-$(CONFIG_VMD) += vmd.o
 obj-$(CONFIG_PCIE_BRCMSTB) += pcie-brcmstb.o
 obj-$(CONFIG_PCI_LOONGSON) += pci-loongson.o
@@ -64,6 +64,7 @@ enum j721e_pcie_mode {
 
 struct j721e_pcie_data {
	enum j721e_pcie_mode mode;
+	bool quirk_retrain_flag;
 };
 
 static inline u32 j721e_pcie_user_readl(struct j721e_pcie *pcie, u32 offset)
@@ -280,6 +281,7 @@ static struct pci_ops cdns_ti_pcie_host_ops = {
 
 static const struct j721e_pcie_data j721e_pcie_rc_data = {
	.mode = PCI_MODE_RC,
+	.quirk_retrain_flag = true,
 };
 
 static const struct j721e_pcie_data j721e_pcie_ep_data = {
@@ -388,6 +390,7 @@ static int j721e_pcie_probe(struct platform_device *pdev)
 
		bridge->ops = &cdns_ti_pcie_host_ops;
		rc = pci_host_bridge_priv(bridge);
+		rc->quirk_retrain_flag = data->quirk_retrain_flag;
 
		cdns_pcie = &rc->pcie;
		cdns_pcie->dev = dev;
@@ -382,6 +382,57 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn,
	return 0;
 }
 
+static int cdns_pcie_ep_map_msi_irq(struct pci_epc *epc, u8 fn,
+				    phys_addr_t addr, u8 interrupt_num,
+				    u32 entry_size, u32 *msi_data,
+				    u32 *msi_addr_offset)
+{
+	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
+	struct cdns_pcie *pcie = &ep->pcie;
+	u64 pci_addr, pci_addr_mask = 0xff;
+	u16 flags, mme, data, data_mask;
+	u8 msi_count;
+	int ret;
+	int i;
+
+	/* Check whether the MSI feature has been enabled by the PCI host. */
+	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
+	if (!(flags & PCI_MSI_FLAGS_ENABLE))
+		return -EINVAL;
+
+	/* Get the number of enabled MSIs */
+	mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
+	msi_count = 1 << mme;
+	if (!interrupt_num || interrupt_num > msi_count)
+		return -EINVAL;
+
+	/* Compute the data value to be written. */
+	data_mask = msi_count - 1;
+	data = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_DATA_64);
+	data = data & ~data_mask;
+
+	/* Get the PCI address where to write the data into. */
+	pci_addr = cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_HI);
+	pci_addr <<= 32;
+	pci_addr |= cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_LO);
+	pci_addr &= GENMASK_ULL(63, 2);
+
+	for (i = 0; i < interrupt_num; i++) {
+		ret = cdns_pcie_ep_map_addr(epc, fn, addr,
+					    pci_addr & ~pci_addr_mask,
+					    entry_size);
+		if (ret)
+			return ret;
+		addr = addr + entry_size;
+	}
+
+	*msi_data = data;
+	*msi_addr_offset = pci_addr & pci_addr_mask;
+
+	return 0;
+}
+
 static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn,
				      u16 interrupt_num)
 {
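The address arithmetic in the hunk above reduces to a small, testable idea: the MSI target address read from the capability is split into an aligned base (which the outbound window maps) and a low-bits offset (which the caller adds back when writing the MSI data). The sketch below is an illustrative model of just that split, not the driver code itself; the 0xff mask mirrors the `pci_addr_mask` used above.

```c
#include <stdint.h>

/* Window alignment mask, as in the hunk above: the outbound window is
 * mapped at an aligned PCI address and the low bits become an offset. */
#define PCI_ADDR_MASK 0xffULL

/* Aligned base that the outbound window should map. */
static inline uint64_t msi_window_base(uint64_t pci_addr)
{
	return pci_addr & ~PCI_ADDR_MASK;
}

/* Offset the endpoint must add when writing the MSI data. */
static inline uint32_t msi_addr_offset(uint64_t pci_addr)
{
	return (uint32_t)(pci_addr & PCI_ADDR_MASK);
}
```

Base plus offset always reconstructs the original target address, which is why the driver can hand the two pieces back separately through `*msi_data`/`*msi_addr_offset`-style out parameters.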
@@ -455,18 +506,13 @@ static int cdns_pcie_ep_start(struct pci_epc *epc)
	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
	struct cdns_pcie *pcie = &ep->pcie;
	struct device *dev = pcie->dev;
-	struct pci_epf *epf;
-	u32 cfg;
	int ret;
 
	/*
	 * BIT(0) is hardwired to 1, hence function 0 is always enabled
	 * and can't be disabled anyway.
	 */
-	cfg = BIT(0);
-	list_for_each_entry(epf, &epc->pci_epf, list)
-		cfg |= BIT(epf->func_no);
-	cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, cfg);
+	cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, epc->function_num_map);
 
	ret = cdns_pcie_start_link(pcie);
	if (ret) {
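The removed lines folded each bound function number into a bitmap before writing LM_EP_FUNC_CFG; the replacement simply trusts the `function_num_map` the EPC core already maintains. As a hedged, standalone illustration of the bitmap the register expects (names here are invented for the sketch, not kernel API):

```c
#include <stdint.h>

/* Fold a set of enabled function numbers into the bitmap that an
 * LM_EP_FUNC_CFG-style register expects.  Bit 0 is hardwired to 1 in
 * the controller, so function 0 is always enabled. */
static uint32_t ep_func_cfg(const unsigned int *func_nos, int count)
{
	uint32_t cfg = 1u << 0;	/* function 0, hardwired on */
	int i;

	for (i = 0; i < count; i++)
		cfg |= 1u << func_nos[i];
	return cfg;
}
```

Precomputing this map once in the EPC core (as the new code does) avoids rebuilding it from the EPF list on every start.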
@@ -481,6 +527,7 @@ static const struct pci_epc_features cdns_pcie_epc_features = {
	.linkup_notifier = false,
	.msi_capable = true,
	.msix_capable = true,
+	.align = 256,
 };
 
 static const struct pci_epc_features*
@@ -500,6 +547,7 @@ static const struct pci_epc_ops cdns_pcie_epc_ops = {
	.set_msix = cdns_pcie_ep_set_msix,
	.get_msix = cdns_pcie_ep_get_msix,
	.raise_irq = cdns_pcie_ep_raise_irq,
+	.map_msi_irq = cdns_pcie_ep_map_msi_irq,
	.start = cdns_pcie_ep_start,
	.get_features = cdns_pcie_ep_get_features,
 };
@@ -77,6 +77,68 @@ static struct pci_ops cdns_pcie_host_ops = {
	.write		= pci_generic_config_write,
 };
 
+static int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie)
+{
+	struct device *dev = pcie->dev;
+	int retries;
+
+	/* Check if the link is up or not */
+	for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
+		if (cdns_pcie_link_up(pcie)) {
+			dev_info(dev, "Link up\n");
+			return 0;
+		}
+		usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
+	}
+
+	return -ETIMEDOUT;
+}
+
+static int cdns_pcie_retrain(struct cdns_pcie *pcie)
+{
+	u32 lnk_cap_sls, pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET;
+	u16 lnk_stat, lnk_ctl;
+	int ret = 0;
+
+	/*
+	 * Set retrain bit if current speed is 2.5 GB/s,
+	 * but the PCIe root port support is > 2.5 GB/s.
+	 */
+
+	lnk_cap_sls = cdns_pcie_readl(pcie, (CDNS_PCIE_RP_BASE + pcie_cap_off +
+					     PCI_EXP_LNKCAP));
+	if ((lnk_cap_sls & PCI_EXP_LNKCAP_SLS) <= PCI_EXP_LNKCAP_SLS_2_5GB)
+		return ret;
+
+	lnk_stat = cdns_pcie_rp_readw(pcie, pcie_cap_off + PCI_EXP_LNKSTA);
+	if ((lnk_stat & PCI_EXP_LNKSTA_CLS) == PCI_EXP_LNKSTA_CLS_2_5GB) {
+		lnk_ctl = cdns_pcie_rp_readw(pcie,
+					     pcie_cap_off + PCI_EXP_LNKCTL);
+		lnk_ctl |= PCI_EXP_LNKCTL_RL;
+		cdns_pcie_rp_writew(pcie, pcie_cap_off + PCI_EXP_LNKCTL,
+				    lnk_ctl);
+
+		ret = cdns_pcie_host_wait_for_link(pcie);
+	}
+	return ret;
+}
+
+static int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc)
+{
+	struct cdns_pcie *pcie = &rc->pcie;
+	int ret;
+
+	ret = cdns_pcie_host_wait_for_link(pcie);
+
+	/*
+	 * Retrain link for Gen2 training defect
+	 * if quirk flag is set.
+	 */
+	if (!ret && rc->quirk_retrain_flag)
+		ret = cdns_pcie_retrain(pcie);
+
+	return ret;
+}
+
 static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
 {
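The retrain decision in `cdns_pcie_retrain()` above boils down to two register reads: retrain only when Link Capabilities says the port could exceed 2.5 GT/s but Link Status says it is currently at 2.5 GT/s. The following is an illustrative model of that predicate with hand-rolled field masks (the names echo, but are not, the Linux `PCI_EXP_*` defines):

```c
#include <stdint.h>
#include <stdbool.h>

#define LNKCAP_SLS_MASK 0x0000000fu	/* Max Link Speed field in LNKCAP */
#define LNKSTA_CLS_MASK 0x000fu		/* Current Link Speed field in LNKSTA */
#define SLS_2_5GB	0x1		/* encoding for 2.5 GT/s */

/* Retrain only when the port is Gen2-capable (or better) yet stuck
 * at the Gen1 rate -- the symptom of the training defect the quirk
 * works around. */
static bool should_retrain(uint32_t lnkcap, uint16_t lnksta)
{
	if ((lnkcap & LNKCAP_SLS_MASK) <= SLS_2_5GB)
		return false;	/* Gen1-only port, nothing to gain */
	return (lnksta & LNKSTA_CLS_MASK) == SLS_2_5GB;
}
```

In the real driver the positive case goes on to set the Retrain Link bit in Link Control and re-poll for link-up.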
@@ -321,9 +383,10 @@ static int cdns_pcie_host_map_dma_ranges(struct cdns_pcie_rc *rc)
 
	resource_list_for_each_entry(entry, &bridge->dma_ranges) {
		err = cdns_pcie_host_bar_config(rc, entry);
-		if (err)
+		if (err) {
			dev_err(dev, "Fail to configure IB using dma-ranges\n");
-		return err;
+			return err;
+		}
	}
 
	return 0;
@@ -398,23 +461,6 @@ static int cdns_pcie_host_init(struct device *dev,
	return cdns_pcie_host_init_address_translation(rc);
 }
 
-static int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie)
-{
-	struct device *dev = pcie->dev;
-	int retries;
-
-	/* Check if the link is up or not */
-	for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
-		if (cdns_pcie_link_up(pcie)) {
-			dev_info(dev, "Link up\n");
-			return 0;
-		}
-		usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
-	}
-
-	return -ETIMEDOUT;
-}
-
 int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
 {
	struct device *dev = rc->pcie.dev;
@@ -457,7 +503,7 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
		return ret;
	}
 
-	ret = cdns_pcie_host_wait_for_link(pcie);
+	ret = cdns_pcie_host_start_link(rc);
	if (ret)
		dev_dbg(dev, "PCIe link never came up\n");
 
@@ -119,7 +119,7 @@
  * Root Port Registers (PCI configuration space for the root port function)
  */
 #define CDNS_PCIE_RP_BASE	0x00200000
-
+#define CDNS_PCIE_RP_CAP_OFFSET 0xc0
 
 /*
  * Address Translation Registers
@@ -291,6 +291,7 @@ struct cdns_pcie {
  * @device_id: PCI device ID
  * @avail_ib_bar: Satus of RP_BAR0, RP_BAR1 and RP_NO_BAR if it's free or
  *                available
+ * @quirk_retrain_flag: Retrain link as quirk for PCIe Gen2
  */
 struct cdns_pcie_rc {
	struct cdns_pcie pcie;
@@ -299,6 +300,7 @@ struct cdns_pcie_rc {
	u32 vendor_id;
	u32 device_id;
	bool avail_ib_bar[CDNS_PCIE_RP_MAX_IB];
+	bool quirk_retrain_flag;
 };
 
 /**
@@ -414,6 +416,13 @@ static inline void cdns_pcie_rp_writew(struct cdns_pcie *pcie,
	cdns_pcie_write_sz(addr, 0x2, value);
 }
 
+static inline u16 cdns_pcie_rp_readw(struct cdns_pcie *pcie, u32 reg)
+{
+	void __iomem *addr = pcie->reg_base + CDNS_PCIE_RP_BASE + reg;
+
+	return cdns_pcie_read_sz(addr, 0x2);
+}
+
 /* Endpoint Function register access */
 static inline void cdns_pcie_ep_fn_writeb(struct cdns_pcie *pcie, u8 fn,
					  u32 reg, u8 value)
@@ -115,10 +115,17 @@ static const struct ls_pcie_ep_drvdata ls2_ep_drvdata = {
	.dw_pcie_ops = &dw_ls_pcie_ep_ops,
 };
 
+static const struct ls_pcie_ep_drvdata lx2_ep_drvdata = {
+	.func_offset = 0x8000,
+	.ops = &ls_pcie_ep_ops,
+	.dw_pcie_ops = &dw_ls_pcie_ep_ops,
+};
+
 static const struct of_device_id ls_pcie_ep_of_match[] = {
	{ .compatible = "fsl,ls1046a-pcie-ep", .data = &ls1_ep_drvdata },
	{ .compatible = "fsl,ls1088a-pcie-ep", .data = &ls2_ep_drvdata },
	{ .compatible = "fsl,ls2088a-pcie-ep", .data = &ls2_ep_drvdata },
+	{ .compatible = "fsl,lx2160ar2-pcie-ep", .data = &lx2_ep_drvdata },
	{ },
 };
 
@@ -232,7 +232,7 @@ static const struct of_device_id ls_pcie_of_match[] = {
	{ },
 };
 
-static int __init ls_pcie_probe(struct platform_device *pdev)
+static int ls_pcie_probe(struct platform_device *pdev)
 {
	struct device *dev = &pdev->dev;
	struct dw_pcie *pci;
@@ -271,10 +271,11 @@ static int ls_pcie_probe(struct platform_device *pdev)
 }
 
 static struct platform_driver ls_pcie_driver = {
+	.probe = ls_pcie_probe,
	.driver = {
		.name = "layerscape-pcie",
		.of_match_table = ls_pcie_of_match,
		.suppress_bind_attrs = true,
	},
 };
-builtin_platform_driver_probe(ls_pcie_driver, ls_pcie_probe);
+builtin_platform_driver(ls_pcie_driver);
@@ -314,9 +314,6 @@ static const struct dw_pcie_host_ops al_pcie_host_ops = {
	.host_init = al_pcie_host_init,
 };
 
-static const struct dw_pcie_ops dw_pcie_ops = {
-};
-
 static int al_pcie_probe(struct platform_device *pdev)
 {
	struct device *dev = &pdev->dev;
@@ -334,7 +331,6 @@ static int al_pcie_probe(struct platform_device *pdev)
		return -ENOMEM;
 
	pci->dev = dev;
-	pci->ops = &dw_pcie_ops;
	pci->pp.ops = &al_pcie_host_ops;
 
	al_pcie->pci = pci;
@@ -434,10 +434,8 @@ static void dw_pcie_ep_stop(struct pci_epc *epc)
	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 
-	if (!pci->ops->stop_link)
-		return;
-
-	pci->ops->stop_link(pci);
+	if (pci->ops && pci->ops->stop_link)
+		pci->ops->stop_link(pci);
 }
 
 static int dw_pcie_ep_start(struct pci_epc *epc)
@@ -445,7 +443,7 @@ static int dw_pcie_ep_start(struct pci_epc *epc)
	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 
-	if (!pci->ops->start_link)
+	if (!pci->ops || !pci->ops->start_link)
		return -EINVAL;
 
	return pci->ops->start_link(pci);
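These hunks implement the series' "make dw_pcie ops optional" theme: once the ops table itself may be NULL, every call site must guard both the table pointer and the member. A reduced, self-contained model of the pattern (types and names invented for illustration, not the kernel's):

```c
#include <stddef.h>

struct link_ops {
	int (*start_link)(void *priv);
	void (*stop_link)(void *priv);
};

struct ctrl {
	const struct link_ops *ops;	/* may legitimately be NULL now */
	void *priv;
};

static int ctrl_start_link(struct ctrl *c)
{
	/* Starting the link is mandatory: missing op is an error (-EINVAL). */
	if (!c->ops || !c->ops->start_link)
		return -22;
	return c->ops->start_link(c->priv);
}

static void ctrl_stop_link(struct ctrl *c)
{
	/* Stopping is best-effort: silently skip when not provided. */
	if (c->ops && c->ops->stop_link)
		c->ops->stop_link(c->priv);
}
```

The asymmetry mirrors the diff: `dw_pcie_ep_start()` still fails without a `start_link` op, while `dw_pcie_ep_stop()` quietly does nothing.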
@@ -258,10 +258,8 @@ int dw_pcie_allocate_domains(struct pcie_port *pp)
 
 static void dw_pcie_free_msi(struct pcie_port *pp)
 {
-	if (pp->msi_irq) {
-		irq_set_chained_handler(pp->msi_irq, NULL);
-		irq_set_handler_data(pp->msi_irq, NULL);
-	}
+	if (pp->msi_irq)
+		irq_set_chained_handler_and_data(pp->msi_irq, NULL, NULL);
 
	irq_domain_remove(pp->msi_domain);
	irq_domain_remove(pp->irq_domain);
|
||||
if (cfg_res) {
|
||||
pp->cfg0_size = resource_size(cfg_res);
|
||||
pp->cfg0_base = cfg_res->start;
|
||||
} else if (!pp->va_cfg0_base) {
|
||||
|
||||
pp->va_cfg0_base = devm_pci_remap_cfg_resource(dev, cfg_res);
|
||||
if (IS_ERR(pp->va_cfg0_base))
|
||||
return PTR_ERR(pp->va_cfg0_base);
|
||||
} else {
|
||||
dev_err(dev, "Missing *config* reg space\n");
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
if (!pci->dbi_base) {
|
||||
@@ -322,38 +325,12 @@ int dw_pcie_host_init(struct pcie_port *pp)
 
	pp->bridge = bridge;
 
-	/* Get the I/O and memory ranges from DT */
-	resource_list_for_each_entry(win, &bridge->windows) {
-		switch (resource_type(win->res)) {
-		case IORESOURCE_IO:
-			pp->io_size = resource_size(win->res);
-			pp->io_bus_addr = win->res->start - win->offset;
-			pp->io_base = pci_pio_to_address(win->res->start);
-			break;
-		case 0:
-			dev_err(dev, "Missing *config* reg space\n");
-			pp->cfg0_size = resource_size(win->res);
-			pp->cfg0_base = win->res->start;
-			if (!pci->dbi_base) {
-				pci->dbi_base = devm_pci_remap_cfgspace(dev,
-							pp->cfg0_base,
-							pp->cfg0_size);
-				if (!pci->dbi_base) {
-					dev_err(dev, "Error with ioremap\n");
-					return -ENOMEM;
-				}
-			}
-			break;
-		}
-	}
-
-	if (!pp->va_cfg0_base) {
-		pp->va_cfg0_base = devm_pci_remap_cfgspace(dev,
-					pp->cfg0_base, pp->cfg0_size);
-		if (!pp->va_cfg0_base) {
-			dev_err(dev, "Error with ioremap in function\n");
-			return -ENOMEM;
-		}
-	}
+	/* Get the I/O range from DT */
+	win = resource_list_first_type(&bridge->windows, IORESOURCE_IO);
+	if (win) {
+		pp->io_size = resource_size(win->res);
+		pp->io_bus_addr = win->res->start - win->offset;
+		pp->io_base = pci_pio_to_address(win->res->start);
+	}
 
	if (pci->link_gen < 1)
@@ -425,7 +402,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
	dw_pcie_setup_rc(pp);
	dw_pcie_msi_init(pp);
 
-	if (!dw_pcie_link_up(pci) && pci->ops->start_link) {
+	if (!dw_pcie_link_up(pci) && pci->ops && pci->ops->start_link) {
		ret = pci->ops->start_link(pci);
		if (ret)
			goto err_free_msi;
@@ -141,7 +141,7 @@ u32 dw_pcie_read_dbi(struct dw_pcie *pci, u32 reg, size_t size)
	int ret;
	u32 val;
 
-	if (pci->ops->read_dbi)
+	if (pci->ops && pci->ops->read_dbi)
		return pci->ops->read_dbi(pci, pci->dbi_base, reg, size);
 
	ret = dw_pcie_read(pci->dbi_base + reg, size, &val);
@@ -156,7 +156,7 @@ void dw_pcie_write_dbi(struct dw_pcie *pci, u32 reg, size_t size, u32 val)
 {
	int ret;
 
-	if (pci->ops->write_dbi) {
+	if (pci->ops && pci->ops->write_dbi) {
		pci->ops->write_dbi(pci, pci->dbi_base, reg, size, val);
		return;
	}
@@ -171,7 +171,7 @@ void dw_pcie_write_dbi2(struct dw_pcie *pci, u32 reg, size_t size, u32 val)
 {
	int ret;
 
-	if (pci->ops->write_dbi2) {
+	if (pci->ops && pci->ops->write_dbi2) {
		pci->ops->write_dbi2(pci, pci->dbi_base2, reg, size, val);
		return;
	}
@@ -186,7 +186,7 @@ static u32 dw_pcie_readl_atu(struct dw_pcie *pci, u32 reg)
	int ret;
	u32 val;
 
-	if (pci->ops->read_dbi)
+	if (pci->ops && pci->ops->read_dbi)
		return pci->ops->read_dbi(pci, pci->atu_base, reg, 4);
 
	ret = dw_pcie_read(pci->atu_base + reg, 4, &val);
@@ -200,7 +200,7 @@ static void dw_pcie_writel_atu(struct dw_pcie *pci, u32 reg, u32 val)
 {
	int ret;
 
-	if (pci->ops->write_dbi) {
+	if (pci->ops && pci->ops->write_dbi) {
		pci->ops->write_dbi(pci, pci->atu_base, reg, 4, val);
		return;
	}
@@ -225,6 +225,47 @@ static void dw_pcie_writel_ob_unroll(struct dw_pcie *pci, u32 index, u32 reg,
	dw_pcie_writel_atu(pci, offset + reg, val);
 }
 
+static inline u32 dw_pcie_enable_ecrc(u32 val)
+{
+	/*
+	 * DesignWare core version 4.90A has a design issue where the 'TD'
+	 * bit in the Control register-1 of the ATU outbound region acts
+	 * like an override for the ECRC setting, i.e., the presence of TLP
+	 * Digest (ECRC) in the outgoing TLPs is solely determined by this
+	 * bit. This is contrary to the PCIe spec which says that the
+	 * enablement of the ECRC is solely determined by the AER
+	 * registers.
+	 *
+	 * Because of this, even when the ECRC is enabled through AER
+	 * registers, the transactions going through ATU won't have TLP
+	 * Digest as there is no way the PCI core AER code could program
+	 * the TD bit which is specific to the DesignWare core.
+	 *
+	 * The best way to handle this scenario is to program the TD bit
+	 * always. It affects only the traffic from root port to downstream
+	 * devices.
+	 *
+	 * At this point,
+	 * When ECRC is enabled in AER registers, everything works normally
+	 * When ECRC is NOT enabled in AER registers, then,
+	 * on Root Port:- TLP Digest (DWord size) gets appended to each packet
+	 *		  even through it is not required. Since downstream
+	 *		  TLPs are mostly for configuration accesses and BAR
+	 *		  accesses, they are not in critical path and won't
+	 *		  have much negative effect on the performance.
+	 * on End Point:- TLP Digest is received for some/all the packets coming
+	 *		  from the root port. TLP Digest is ignored because,
+	 *		  as per the PCIe Spec r5.0 v1.0 section 2.2.3
+	 *		  "TLP Digest Rules", when an endpoint receives TLP
+	 *		  Digest when its ECRC check functionality is disabled
+	 *		  in AER registers, received TLP Digest is just ignored.
+	 * Since there is no issue or error reported either side, best way to
+	 * handle the scenario is to program TD bit by default.
+	 */
+
+	return val | PCIE_ATU_TD;
+}
+
 static void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, u8 func_no,
					     int index, int type,
					     u64 cpu_addr, u64 pci_addr,
@@ -248,6 +289,8 @@ static void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, u8 func_no,
	val = type | PCIE_ATU_FUNC_NUM(func_no);
	val = upper_32_bits(size - 1) ?
		val | PCIE_ATU_INCREASE_REGION_SIZE : val;
+	if (pci->version == 0x490A)
+		val = dw_pcie_enable_ecrc(val);
	dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL1, val);
	dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL2,
				 PCIE_ATU_ENABLE);
@@ -273,7 +316,7 @@ static void __dw_pcie_prog_outbound_atu(struct dw_pcie *pci, u8 func_no,
 {
	u32 retries, val;
 
-	if (pci->ops->cpu_addr_fixup)
+	if (pci->ops && pci->ops->cpu_addr_fixup)
		cpu_addr = pci->ops->cpu_addr_fixup(pci, cpu_addr);
 
	if (pci->iatu_unroll_enabled) {
@@ -290,12 +333,19 @@ static void __dw_pcie_prog_outbound_atu(struct dw_pcie *pci, u8 func_no,
			   upper_32_bits(cpu_addr));
	dw_pcie_writel_dbi(pci, PCIE_ATU_LIMIT,
			   lower_32_bits(cpu_addr + size - 1));
+	if (pci->version >= 0x460A)
+		dw_pcie_writel_dbi(pci, PCIE_ATU_UPPER_LIMIT,
+				   upper_32_bits(cpu_addr + size - 1));
	dw_pcie_writel_dbi(pci, PCIE_ATU_LOWER_TARGET,
			   lower_32_bits(pci_addr));
	dw_pcie_writel_dbi(pci, PCIE_ATU_UPPER_TARGET,
			   upper_32_bits(pci_addr));
-	dw_pcie_writel_dbi(pci, PCIE_ATU_CR1, type |
-			   PCIE_ATU_FUNC_NUM(func_no));
+	val = type | PCIE_ATU_FUNC_NUM(func_no);
+	val = ((upper_32_bits(size - 1)) && (pci->version >= 0x460A)) ?
+		val | PCIE_ATU_INCREASE_REGION_SIZE : val;
+	if (pci->version == 0x490A)
+		val = dw_pcie_enable_ecrc(val);
	dw_pcie_writel_dbi(pci, PCIE_ATU_CR1, val);
	dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, PCIE_ATU_ENABLE);
 
	/*
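The INCREASE_REGION_SIZE decision in the hunk above has a subtle boundary: the bit is needed only when `(size - 1)` has bits above bit 31, i.e. the window is strictly larger than 4 GiB (a window of exactly 4 GiB still has its limit fit in 32 bits), and only on IP revisions that actually have the UPPER_LIMIT register. A hedged standalone model of that predicate:

```c
#include <stdint.h>
#include <stdbool.h>

/* Equivalent of the kernel's upper_32_bits() helper. */
static inline uint32_t upper_32(uint64_t v)
{
	return (uint32_t)(v >> 32);
}

/* The region-size bit matters only when (size - 1) spills past bit 31
 * (window strictly larger than 4 GiB) AND the IP is >= 4.60a, the
 * revision that gained the UPPER_LIMIT register. */
static bool needs_increase_region_size(uint64_t size, unsigned int version)
{
	return upper_32(size - 1) && version >= 0x460A;
}
```

This also motivates the companion change widening the EP outbound iATU `size` parameter from u32 to u64: with a 32-bit size, windows above 4 GiB could never be expressed in the first place.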
@@ -321,7 +371,7 @@ void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type,
 
 void dw_pcie_prog_ep_outbound_atu(struct dw_pcie *pci, u8 func_no, int index,
				  int type, u64 cpu_addr, u64 pci_addr,
-				  u32 size)
+				  u64 size)
 {
	__dw_pcie_prog_outbound_atu(pci, func_no, index, type,
				    cpu_addr, pci_addr, size);
@@ -481,7 +531,7 @@ int dw_pcie_link_up(struct dw_pcie *pci)
 {
	u32 val;
 
-	if (pci->ops->link_up)
+	if (pci->ops && pci->ops->link_up)
		return pci->ops->link_up(pci);
 
	val = readl(pci->dbi_base + PCIE_PORT_DEBUG1);
@@ -86,6 +86,7 @@
 #define PCIE_ATU_TYPE_IO	0x2
 #define PCIE_ATU_TYPE_CFG0	0x4
 #define PCIE_ATU_TYPE_CFG1	0x5
+#define PCIE_ATU_TD		BIT(8)
 #define PCIE_ATU_FUNC_NUM(pf)	((pf) << 20)
 #define PCIE_ATU_CR2		0x908
 #define PCIE_ATU_ENABLE		BIT(31)
@@ -99,6 +100,7 @@
 #define PCIE_ATU_DEV(x)		FIELD_PREP(GENMASK(23, 19), x)
 #define PCIE_ATU_FUNC(x)	FIELD_PREP(GENMASK(18, 16), x)
 #define PCIE_ATU_UPPER_TARGET	0x91C
+#define PCIE_ATU_UPPER_LIMIT	0x924
 
 #define PCIE_MISC_CONTROL_1_OFF	0x8BC
 #define PCIE_DBI_RO_WR_EN	BIT(0)
@@ -297,7 +299,7 @@ void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index,
			       u64 size);
 void dw_pcie_prog_ep_outbound_atu(struct dw_pcie *pci, u8 func_no, int index,
				  int type, u64 cpu_addr, u64 pci_addr,
-				  u32 size);
+				  u64 size);
 int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, u8 func_no, int index,
			     int bar, u64 cpu_addr,
			     enum dw_pcie_as_type as_type);
@@ -159,8 +159,10 @@ struct qcom_pcie_resources_2_3_3 {
	struct reset_control *rst[7];
 };
 
+/* 6 clocks typically, 7 for sm8250 */
 struct qcom_pcie_resources_2_7_0 {
-	struct clk_bulk_data clks[6];
+	struct clk_bulk_data clks[7];
+	int num_clks;
	struct regulator_bulk_data supplies[2];
	struct reset_control *pci_reset;
	struct clk *pipe_clk;
@@ -398,7 +400,9 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie)
 
	/* enable external reference clock */
	val = readl(pcie->parf + PCIE20_PARF_PHY_REFCLK);
-	val &= ~PHY_REFCLK_USE_PAD;
+	/* USE_PAD is required only for ipq806x */
+	if (!of_device_is_compatible(node, "qcom,pcie-apq8064"))
+		val &= ~PHY_REFCLK_USE_PAD;
	val |= PHY_REFCLK_SSP_EN;
	writel(val, pcie->parf + PCIE20_PARF_PHY_REFCLK);
@@ -1152,8 +1156,14 @@ static int qcom_pcie_get_resources_2_7_0(struct qcom_pcie *pcie)
	res->clks[3].id = "bus_slave";
	res->clks[4].id = "slave_q2a";
	res->clks[5].id = "tbu";
+	if (of_device_is_compatible(dev->of_node, "qcom,pcie-sm8250")) {
+		res->clks[6].id = "ddrss_sf_tbu";
+		res->num_clks = 7;
+	} else {
+		res->num_clks = 6;
+	}
 
-	ret = devm_clk_bulk_get(dev, ARRAY_SIZE(res->clks), res->clks);
+	ret = devm_clk_bulk_get(dev, res->num_clks, res->clks);
	if (ret < 0)
		return ret;
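The pattern introduced here is worth noting: the `clks[]` array is sized for the worst case (7 slots) while a `num_clks` field records how many are actually populated, so every `clk_bulk_*` call switches from `ARRAY_SIZE(res->clks)` to `res->num_clks` and the sm8250-only slot stays untouched on other SoCs. A self-contained sketch of the idea, with invented names and a placeholder id for the six common clocks (the diff does not show their real ids, so none are guessed here):

```c
/* Illustrative model only -- not the driver's real types or clock ids. */
#define MAX_CLKS 7

struct pcie_res {
	const char *clk_ids[MAX_CLKS];	/* sized for the worst case */
	int num_clks;			/* how many slots are in use */
};

static void res_init(struct pcie_res *res, int is_sm8250)
{
	int i;

	/* Six clocks are always present; ids elided in the sketch. */
	for (i = 0; i < 6; i++)
		res->clk_ids[i] = "common-clk";
	if (is_sm8250) {
		res->clk_ids[6] = "ddrss_sf_tbu";	/* sm8250-only clock */
		res->num_clks = 7;
	} else {
		res->num_clks = 6;
	}
	/* Consumers must iterate num_clks entries, never MAX_CLKS. */
}
```

Had the driver kept `ARRAY_SIZE`, non-sm8250 platforms would have requested a seventh clock with a NULL id and failed to probe.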
@@ -1175,7 +1185,7 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie)
		return ret;
	}
 
-	ret = clk_bulk_prepare_enable(ARRAY_SIZE(res->clks), res->clks);
+	ret = clk_bulk_prepare_enable(res->num_clks, res->clks);
	if (ret < 0)
		goto err_disable_regulators;
 
@@ -1227,7 +1237,7 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie)
 
 	return 0;
 err_disable_clocks:
-	clk_bulk_disable_unprepare(ARRAY_SIZE(res->clks), res->clks);
+	clk_bulk_disable_unprepare(res->num_clks, res->clks);
 err_disable_regulators:
 	regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies);
 
@@ -1238,7 +1248,7 @@ static void qcom_pcie_deinit_2_7_0(struct qcom_pcie *pcie)
 {
 	struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0;
 
-	clk_bulk_disable_unprepare(ARRAY_SIZE(res->clks), res->clks);
+	clk_bulk_disable_unprepare(res->num_clks, res->clks);
 	regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies);
 }
 
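The qcom hunks above replace `ARRAY_SIZE(res->clks)` with a `num_clks` field so the sm8250 variant can use a seventh clock ("ddrss_sf_tbu") while other SoCs keep six, and so enable/disable only ever touch the clocks that were actually obtained. A minimal userspace sketch of that pattern (the `clk_stub` type and the first three clock ids are placeholders of mine, not the kernel API; only ids shown in the hunk are real):

```c
#include <assert.h>
#include <string.h>

/* Stub stand-in for one clock; the real driver uses struct clk_bulk_data. */
struct clk_stub {
	const char *id;
	int enabled;
};

#define MAX_CLKS 7

/*
 * Fill the table and report how many entries are valid for this SoC:
 * 7 on sm8250 (extra "ddrss_sf_tbu" clock), 6 everywhere else.
 * The first three ids are placeholders; they are not visible in the hunk.
 */
static int qcom_get_clock_count(struct clk_stub clks[MAX_CLKS],
				const char *compatible)
{
	static const char *ids[] = { "clk_a", "clk_b", "clk_c",
				     "bus_slave", "slave_q2a", "tbu" };
	int i, num_clks = 6;

	for (i = 0; i < 6; i++)
		clks[i].id = ids[i];
	if (strcmp(compatible, "qcom,pcie-sm8250") == 0) {
		clks[6].id = "ddrss_sf_tbu";
		num_clks = 7;
	}
	return num_clks;
}

/* Enable/disable only the first num_clks entries, never ARRAY_SIZE of them. */
static void bulk_set(struct clk_stub *clks, int num_clks, int on)
{
	int i;

	for (i = 0; i < num_clks; i++)
		clks[i].enabled = on;
}
```

The point of the fix: once the array is larger than the number of clocks a given SoC provides, every bulk call must be bounded by the obtained count, not the array size.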
@@ -64,6 +64,8 @@ int pci_host_common_probe(struct platform_device *pdev)
 	if (!bridge)
 		return -ENOMEM;
 
+	platform_set_drvdata(pdev, bridge);
+
 	of_pci_check_probe_only();
 
 	/* Parse and map our Configuration Space windows */
@@ -78,8 +80,6 @@ int pci_host_common_probe(struct platform_device *pdev)
 	bridge->sysdata = cfg;
 	bridge->ops = (struct pci_ops *)&ops->pci_ops;
 
-	platform_set_drvdata(pdev, bridge);
-
 	return pci_host_probe(bridge);
 }
 EXPORT_SYMBOL_GPL(pci_host_common_probe);
@@ -1714,7 +1714,7 @@ static void prepopulate_bars(struct hv_pcibus_device *hbus)
 	 * resumed and suspended again: see hibernation_snapshot() and
 	 * hibernation_platform_enter().
 	 *
-	 * If the memory enable bit is already set, Hyper-V sliently ignores
+	 * If the memory enable bit is already set, Hyper-V silently ignores
 	 * the below BAR updates, and the related PCI device driver can not
 	 * work, because reading from the device register(s) always returns
 	 * 0xFFFFFFFF.
@@ -384,13 +384,9 @@ static int xgene_msi_hwirq_alloc(unsigned int cpu)
 		if (!msi_group->gic_irq)
 			continue;
 
-		irq_set_chained_handler(msi_group->gic_irq,
-					xgene_msi_isr);
-		err = irq_set_handler_data(msi_group->gic_irq, msi_group);
-		if (err) {
-			pr_err("failed to register GIC IRQ handler\n");
-			return -EINVAL;
-		}
+		irq_set_chained_handler_and_data(msi_group->gic_irq,
+				xgene_msi_isr, msi_group);
+
 		/*
 		 * Statically allocate MSI GIC IRQs to each CPU core.
 		 * With 8-core X-Gene v1, 2 MSI GIC IRQs are allocated
@@ -173,12 +173,13 @@ static int xgene_pcie_config_read32(struct pci_bus *bus, unsigned int devfn,
 
 	/*
 	 * The v1 controller has a bug in its Configuration Request
-	 * Retry Status (CRS) logic: when CRS is enabled and we read the
-	 * Vendor and Device ID of a non-existent device, the controller
-	 * fabricates return data of 0xFFFF0001 ("device exists but is not
-	 * ready") instead of 0xFFFFFFFF ("device does not exist"). This
-	 * causes the PCI core to retry the read until it times out.
-	 * Avoid this by not claiming to support CRS.
+	 * Retry Status (CRS) logic: when CRS Software Visibility is
+	 * enabled and we read the Vendor and Device ID of a non-existent
+	 * device, the controller fabricates return data of 0xFFFF0001
+	 * ("device exists but is not ready") instead of 0xFFFFFFFF
+	 * ("device does not exist"). This causes the PCI core to retry
+	 * the read until it times out. Avoid this by not claiming to
+	 * support CRS SV.
 	 */
 	if (pci_is_root_bus(bus) && (port->version == XGENE_PCIE_IP_VER_1) &&
 	    ((where & ~0x3) == XGENE_V1_PCI_EXP_CAP + PCI_EXP_RTCTL))
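The X-Gene workaround above hinges on telling the fabricated CRS completion value apart from a normal "no device" read. A small sketch of that distinction (helper names are illustrative, not from the driver):

```c
#include <assert.h>
#include <stdint.h>

/*
 * With CRS Software Visibility enabled, a config read of the Vendor/Device ID
 * of a device that is present but not yet ready completes as 0xFFFF0001
 * (invalid Device ID 0xFFFF, Vendor ID 0x0001); 0xFFFFFFFF means nothing
 * responded at all. The buggy v1 controller fabricates the former for absent
 * devices, so the PCI core keeps retrying until it times out.
 */
static int is_crs_completion(uint32_t id_dword)
{
	return id_dword == 0xffff0001u;
}

static int is_no_device(uint32_t id_dword)
{
	return id_dword == 0xffffffffu;
}
```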
@@ -204,8 +204,7 @@ static int altera_msi_remove(struct platform_device *pdev)
 	struct altera_msi *msi = platform_get_drvdata(pdev);
 
 	msi_writel(msi, 0, MSI_INTMASK);
-	irq_set_chained_handler(msi->irq, NULL);
-	irq_set_handler_data(msi->irq, NULL);
+	irq_set_chained_handler_and_data(msi->irq, NULL, NULL);
 
 	altera_free_domains(msi);
 
@@ -97,6 +97,7 @@
 
 #define PCIE_MISC_REVISION			0x406c
 #define  BRCM_PCIE_HW_REV_33			0x0303
+#define  BRCM_PCIE_HW_REV_3_20			0x0320
 
 #define PCIE_MISC_CPU_2_PCIE_MEM_WIN0_BASE_LIMIT		0x4070
 #define  PCIE_MISC_CPU_2_PCIE_MEM_WIN0_BASE_LIMIT_LIMIT_MASK	0xfff00000
@@ -187,6 +188,7 @@
 struct brcm_pcie;
 static inline void brcm_pcie_bridge_sw_init_set_7278(struct brcm_pcie *pcie, u32 val);
 static inline void brcm_pcie_bridge_sw_init_set_generic(struct brcm_pcie *pcie, u32 val);
+static inline void brcm_pcie_perst_set_4908(struct brcm_pcie *pcie, u32 val);
 static inline void brcm_pcie_perst_set_7278(struct brcm_pcie *pcie, u32 val);
 static inline void brcm_pcie_perst_set_generic(struct brcm_pcie *pcie, u32 val);
 
@@ -203,6 +205,7 @@ enum {
 
 enum pcie_type {
 	GENERIC,
+	BCM4908,
 	BCM7278,
 	BCM2711,
 };
@@ -227,6 +230,13 @@ static const struct pcie_cfg_data generic_cfg = {
 	.bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_generic,
 };
 
+static const struct pcie_cfg_data bcm4908_cfg = {
+	.offsets	= pcie_offsets,
+	.type		= BCM4908,
+	.perst_set	= brcm_pcie_perst_set_4908,
+	.bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_generic,
+};
+
 static const int pcie_offset_bcm7278[] = {
 	[RGR1_SW_INIT_1] = 0xc010,
 	[EXT_CFG_INDEX] = 0x9000,
@@ -279,6 +289,7 @@ struct brcm_pcie {
 	const int		*reg_offsets;
 	enum pcie_type		type;
 	struct reset_control	*rescal;
+	struct reset_control	*perst_reset;
 	int			num_memc;
 	u64			memc_size[PCIE_BRCM_MAX_MEMC];
 	u32			hw_rev;
@@ -603,8 +614,7 @@ static void brcm_msi_remove(struct brcm_pcie *pcie)
 
 	if (!msi)
 		return;
-	irq_set_chained_handler(msi->irq, NULL);
-	irq_set_handler_data(msi->irq, NULL);
+	irq_set_chained_handler_and_data(msi->irq, NULL, NULL);
 	brcm_free_domains(msi);
 }
 
@@ -735,6 +745,17 @@ static inline void brcm_pcie_bridge_sw_init_set_7278(struct brcm_pcie *pcie, u32
 	writel(tmp, pcie->base + PCIE_RGR1_SW_INIT_1(pcie));
 }
 
+static inline void brcm_pcie_perst_set_4908(struct brcm_pcie *pcie, u32 val)
+{
+	if (WARN_ONCE(!pcie->perst_reset, "missing PERST# reset controller\n"))
+		return;
+
+	if (val)
+		reset_control_assert(pcie->perst_reset);
+	else
+		reset_control_deassert(pcie->perst_reset);
+}
+
 static inline void brcm_pcie_perst_set_7278(struct brcm_pcie *pcie, u32 val)
 {
 	u32 tmp;
@@ -1194,6 +1215,7 @@ static int brcm_pcie_remove(struct platform_device *pdev)
 
 static const struct of_device_id brcm_pcie_match[] = {
 	{ .compatible = "brcm,bcm2711-pcie", .data = &bcm2711_cfg },
+	{ .compatible = "brcm,bcm4908-pcie", .data = &bcm4908_cfg },
 	{ .compatible = "brcm,bcm7211-pcie", .data = &generic_cfg },
 	{ .compatible = "brcm,bcm7278-pcie", .data = &bcm7278_cfg },
 	{ .compatible = "brcm,bcm7216-pcie", .data = &bcm7278_cfg },
@@ -1250,6 +1272,11 @@ static int brcm_pcie_probe(struct platform_device *pdev)
 		clk_disable_unprepare(pcie->clk);
 		return PTR_ERR(pcie->rescal);
 	}
+	pcie->perst_reset = devm_reset_control_get_optional_exclusive(&pdev->dev, "perst");
+	if (IS_ERR(pcie->perst_reset)) {
+		clk_disable_unprepare(pcie->clk);
+		return PTR_ERR(pcie->perst_reset);
+	}
 
 	ret = reset_control_deassert(pcie->rescal);
 	if (ret)
@@ -1267,6 +1294,10 @@ static int brcm_pcie_probe(struct platform_device *pdev)
 		goto fail;
 
 	pcie->hw_rev = readl(pcie->base + PCIE_MISC_REVISION);
+	if (pcie->type == BCM4908 && pcie->hw_rev >= BRCM_PCIE_HW_REV_3_20) {
+		dev_err(pcie->dev, "hardware revision with unsupported PERST# setup\n");
+		goto fail;
+	}
 
 	msi_np = of_parse_phandle(pcie->np, "msi-parent", 0);
 	if (pci_msi_enabled() && msi_np == pcie->np) {
@@ -1035,14 +1035,14 @@ static int mtk_pcie_setup(struct mtk_pcie *pcie)
 		err = of_pci_get_devfn(child);
 		if (err < 0) {
 			dev_err(dev, "failed to parse devfn: %d\n", err);
-			return err;
+			goto error_put_node;
 		}
 
 		slot = PCI_SLOT(err);
 
 		err = mtk_pcie_parse_port(pcie, child, slot);
 		if (err)
-			return err;
+			goto error_put_node;
 	}
 
 	err = mtk_pcie_subsys_powerup(pcie);
@@ -1058,6 +1058,9 @@ static int mtk_pcie_setup(struct mtk_pcie *pcie)
 	mtk_pcie_subsys_powerdown(pcie);
 
 	return 0;
+error_put_node:
+	of_node_put(child);
+	return err;
 }
 
 static int mtk_pcie_probe(struct platform_device *pdev)
 drivers/pci/controller/pcie-microchip-host.c | 1138 ++++++++++++++++++++ (new file; diff suppressed because it is too large)
@@ -735,7 +735,7 @@ static int rcar_pcie_enable_msi(struct rcar_pcie_host *host)
 	}
 
 	/* setup MSI data target */
-	msi->pages = __get_free_pages(GFP_KERNEL, 0);
+	msi->pages = __get_free_pages(GFP_KERNEL | GFP_DMA32, 0);
 	rcar_pcie_hw_enable_msi(host);
 
 	return 0;
@@ -82,7 +82,7 @@ int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip)
 	}
 
 	rockchip->mgmt_sticky_rst = devm_reset_control_get_exclusive(dev,
-							"mgmt-sticky");
+								     "mgmt-sticky");
 	if (IS_ERR(rockchip->mgmt_sticky_rst)) {
 		if (PTR_ERR(rockchip->mgmt_sticky_rst) != -EPROBE_DEFER)
 			dev_err(dev, "missing mgmt-sticky reset property in node\n");
@@ -118,11 +118,11 @@ int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip)
 	}
 
 	if (rockchip->is_rc) {
-		rockchip->ep_gpio = devm_gpiod_get(dev, "ep", GPIOD_OUT_HIGH);
-		if (IS_ERR(rockchip->ep_gpio)) {
-			dev_err(dev, "missing ep-gpios property in node\n");
-			return PTR_ERR(rockchip->ep_gpio);
-		}
+		rockchip->ep_gpio = devm_gpiod_get_optional(dev, "ep",
+							    GPIOD_OUT_HIGH);
+		if (IS_ERR(rockchip->ep_gpio))
+			return dev_err_probe(dev, PTR_ERR(rockchip->ep_gpio),
+					     "failed to get ep GPIO\n");
 	}
 
 	rockchip->aclk_pcie = devm_clk_get(dev, "aclk");
@@ -1,341 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-#include <linux/irqchip/chained_irq.h>
-#include <linux/irqdomain.h>
-#include <linux/pci-ecam.h>
-#include <linux/delay.h>
-#include <linux/msi.h>
-#include <linux/of_address.h>
-
-#define MSI_MAX	256
-
-#define SMP8759_MUX		0x48
-#define SMP8759_TEST_OUT	0x74
-#define SMP8759_DOORBELL	0x7c
-#define SMP8759_STATUS		0x80
-#define SMP8759_ENABLE		0xa0
-
-struct tango_pcie {
-	DECLARE_BITMAP(used_msi, MSI_MAX);
-	u64 msi_doorbell;
-	spinlock_t used_msi_lock;
-	void __iomem *base;
-	struct irq_domain *dom;
-};
-
-static void tango_msi_isr(struct irq_desc *desc)
-{
-	struct irq_chip *chip = irq_desc_get_chip(desc);
-	struct tango_pcie *pcie = irq_desc_get_handler_data(desc);
-	unsigned long status, base, virq, idx, pos = 0;
-
-	chained_irq_enter(chip, desc);
-	spin_lock(&pcie->used_msi_lock);
-
-	while ((pos = find_next_bit(pcie->used_msi, MSI_MAX, pos)) < MSI_MAX) {
-		base = round_down(pos, 32);
-		status = readl_relaxed(pcie->base + SMP8759_STATUS + base / 8);
-		for_each_set_bit(idx, &status, 32) {
-			virq = irq_find_mapping(pcie->dom, base + idx);
-			generic_handle_irq(virq);
-		}
-		pos = base + 32;
-	}
-
-	spin_unlock(&pcie->used_msi_lock);
-	chained_irq_exit(chip, desc);
-}
-
-static void tango_ack(struct irq_data *d)
-{
-	struct tango_pcie *pcie = d->chip_data;
-	u32 offset = (d->hwirq / 32) * 4;
-	u32 bit = BIT(d->hwirq % 32);
-
-	writel_relaxed(bit, pcie->base + SMP8759_STATUS + offset);
-}
-
-static void update_msi_enable(struct irq_data *d, bool unmask)
-{
-	unsigned long flags;
-	struct tango_pcie *pcie = d->chip_data;
-	u32 offset = (d->hwirq / 32) * 4;
-	u32 bit = BIT(d->hwirq % 32);
-	u32 val;
-
-	spin_lock_irqsave(&pcie->used_msi_lock, flags);
-	val = readl_relaxed(pcie->base + SMP8759_ENABLE + offset);
-	val = unmask ? val | bit : val & ~bit;
-	writel_relaxed(val, pcie->base + SMP8759_ENABLE + offset);
-	spin_unlock_irqrestore(&pcie->used_msi_lock, flags);
-}
-
-static void tango_mask(struct irq_data *d)
-{
-	update_msi_enable(d, false);
-}
-
-static void tango_unmask(struct irq_data *d)
-{
-	update_msi_enable(d, true);
-}
-
-static int tango_set_affinity(struct irq_data *d, const struct cpumask *mask,
-			      bool force)
-{
-	return -EINVAL;
-}
-
-static void tango_compose_msi_msg(struct irq_data *d, struct msi_msg *msg)
-{
-	struct tango_pcie *pcie = d->chip_data;
-	msg->address_lo = lower_32_bits(pcie->msi_doorbell);
-	msg->address_hi = upper_32_bits(pcie->msi_doorbell);
-	msg->data = d->hwirq;
-}
-
-static struct irq_chip tango_chip = {
-	.irq_ack		= tango_ack,
-	.irq_mask		= tango_mask,
-	.irq_unmask		= tango_unmask,
-	.irq_set_affinity	= tango_set_affinity,
-	.irq_compose_msi_msg	= tango_compose_msi_msg,
-};
-
-static void msi_ack(struct irq_data *d)
-{
-	irq_chip_ack_parent(d);
-}
-
-static void msi_mask(struct irq_data *d)
-{
-	pci_msi_mask_irq(d);
-	irq_chip_mask_parent(d);
-}
-
-static void msi_unmask(struct irq_data *d)
-{
-	pci_msi_unmask_irq(d);
-	irq_chip_unmask_parent(d);
-}
-
-static struct irq_chip msi_chip = {
-	.name = "MSI",
-	.irq_ack = msi_ack,
-	.irq_mask = msi_mask,
-	.irq_unmask = msi_unmask,
-};
-
-static struct msi_domain_info msi_dom_info = {
-	.flags	= MSI_FLAG_PCI_MSIX
-		| MSI_FLAG_USE_DEF_DOM_OPS
-		| MSI_FLAG_USE_DEF_CHIP_OPS,
-	.chip	= &msi_chip,
-};
-
-static int tango_irq_domain_alloc(struct irq_domain *dom, unsigned int virq,
-				  unsigned int nr_irqs, void *args)
-{
-	struct tango_pcie *pcie = dom->host_data;
-	unsigned long flags;
-	int pos;
-
-	spin_lock_irqsave(&pcie->used_msi_lock, flags);
-	pos = find_first_zero_bit(pcie->used_msi, MSI_MAX);
-	if (pos >= MSI_MAX) {
-		spin_unlock_irqrestore(&pcie->used_msi_lock, flags);
-		return -ENOSPC;
-	}
-	__set_bit(pos, pcie->used_msi);
-	spin_unlock_irqrestore(&pcie->used_msi_lock, flags);
-	irq_domain_set_info(dom, virq, pos, &tango_chip,
-			    pcie, handle_edge_irq, NULL, NULL);
-
-	return 0;
-}
-
-static void tango_irq_domain_free(struct irq_domain *dom, unsigned int virq,
-				  unsigned int nr_irqs)
-{
-	unsigned long flags;
-	struct irq_data *d = irq_domain_get_irq_data(dom, virq);
-	struct tango_pcie *pcie = d->chip_data;
-
-	spin_lock_irqsave(&pcie->used_msi_lock, flags);
-	__clear_bit(d->hwirq, pcie->used_msi);
-	spin_unlock_irqrestore(&pcie->used_msi_lock, flags);
-}
-
-static const struct irq_domain_ops dom_ops = {
-	.alloc	= tango_irq_domain_alloc,
-	.free	= tango_irq_domain_free,
-};
-
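The deleted tango driver's tango_irq_domain_alloc hands out MSI vectors first-fit from a 256-bit bitmap under a spinlock. The same allocation logic can be sketched in plain C (no locking, userspace stand-ins for the kernel's find_first_zero_bit/__set_bit helpers):

```c
#include <assert.h>
#include <string.h>

#define MSI_MAX   256
#define WORD_BITS 64

static unsigned long used_msi[MSI_MAX / WORD_BITS];

/* find_first_zero_bit + __set_bit, rolled into one first-fit allocator. */
static int msi_alloc(void)
{
	int pos;

	for (pos = 0; pos < MSI_MAX; pos++) {
		unsigned long *w = &used_msi[pos / WORD_BITS];
		unsigned long bit = 1UL << (pos % WORD_BITS);

		if (!(*w & bit)) {
			*w |= bit;
			return pos;	/* the hwirq number */
		}
	}
	return -1;	/* the driver returns -ENOSPC here */
}

/* __clear_bit: release a vector so it can be handed out again. */
static void msi_free(int pos)
{
	used_msi[pos / WORD_BITS] &= ~(1UL << (pos % WORD_BITS));
}
```

Freed vectors are reused lowest-first, which keeps the ISR's bitmap scan short when few MSIs are active.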
-static int smp8759_config_read(struct pci_bus *bus, unsigned int devfn,
-			       int where, int size, u32 *val)
-{
-	struct pci_config_window *cfg = bus->sysdata;
-	struct tango_pcie *pcie = dev_get_drvdata(cfg->parent);
-	int ret;
-
-	/* Reads in configuration space outside devfn 0 return garbage */
-	if (devfn != 0)
-		return PCIBIOS_FUNC_NOT_SUPPORTED;
-
-	/*
-	 * PCI config and MMIO accesses are muxed. Linux doesn't have a
-	 * mutual exclusion mechanism for config vs. MMIO accesses, so
-	 * concurrent accesses may cause corruption.
-	 */
-	writel_relaxed(1, pcie->base + SMP8759_MUX);
-	ret = pci_generic_config_read(bus, devfn, where, size, val);
-	writel_relaxed(0, pcie->base + SMP8759_MUX);
-
-	return ret;
-}
-
-static int smp8759_config_write(struct pci_bus *bus, unsigned int devfn,
-				int where, int size, u32 val)
-{
-	struct pci_config_window *cfg = bus->sysdata;
-	struct tango_pcie *pcie = dev_get_drvdata(cfg->parent);
-	int ret;
-
-	writel_relaxed(1, pcie->base + SMP8759_MUX);
-	ret = pci_generic_config_write(bus, devfn, where, size, val);
-	writel_relaxed(0, pcie->base + SMP8759_MUX);
-
-	return ret;
-}
-
-static const struct pci_ecam_ops smp8759_ecam_ops = {
-	.pci_ops	= {
-		.map_bus	= pci_ecam_map_bus,
-		.read		= smp8759_config_read,
-		.write		= smp8759_config_write,
-	}
-};
-
-static int tango_pcie_link_up(struct tango_pcie *pcie)
-{
-	void __iomem *test_out = pcie->base + SMP8759_TEST_OUT;
-	int i;
-
-	writel_relaxed(16, test_out);
-	for (i = 0; i < 10; ++i) {
-		u32 ltssm_state = readl_relaxed(test_out) >> 8;
-		if ((ltssm_state & 0x1f) == 0xf) /* L0 */
-			return 1;
-		usleep_range(3000, 4000);
-	}
-
-	return 0;
-}
-
-static int tango_pcie_probe(struct platform_device *pdev)
-{
-	struct device *dev = &pdev->dev;
-	struct tango_pcie *pcie;
-	struct resource *res;
-	struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
-	struct irq_domain *msi_dom, *irq_dom;
-	struct of_pci_range_parser parser;
-	struct of_pci_range range;
-	int virq, offset;
-
-	dev_warn(dev, "simultaneous PCI config and MMIO accesses may cause data corruption\n");
-	add_taint(TAINT_CRAP, LOCKDEP_STILL_OK);
-
-	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
-	if (!pcie)
-		return -ENOMEM;
-
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
-	pcie->base = devm_ioremap_resource(dev, res);
-	if (IS_ERR(pcie->base))
-		return PTR_ERR(pcie->base);
-
-	platform_set_drvdata(pdev, pcie);
-
-	if (!tango_pcie_link_up(pcie))
-		return -ENODEV;
-
-	if (of_pci_dma_range_parser_init(&parser, dev->of_node) < 0)
-		return -ENOENT;
-
-	if (of_pci_range_parser_one(&parser, &range) == NULL)
-		return -ENOENT;
-
-	range.pci_addr += range.size;
-	pcie->msi_doorbell = range.pci_addr + res->start + SMP8759_DOORBELL;
-
-	for (offset = 0; offset < MSI_MAX / 8; offset += 4)
-		writel_relaxed(0, pcie->base + SMP8759_ENABLE + offset);
-
-	virq = platform_get_irq(pdev, 1);
-	if (virq < 0)
-		return virq;
-
-	irq_dom = irq_domain_create_linear(fwnode, MSI_MAX, &dom_ops, pcie);
-	if (!irq_dom) {
-		dev_err(dev, "Failed to create IRQ domain\n");
-		return -ENOMEM;
-	}
-
-	msi_dom = pci_msi_create_irq_domain(fwnode, &msi_dom_info, irq_dom);
-	if (!msi_dom) {
-		dev_err(dev, "Failed to create MSI domain\n");
-		irq_domain_remove(irq_dom);
-		return -ENOMEM;
-	}
-
-	pcie->dom = irq_dom;
-	spin_lock_init(&pcie->used_msi_lock);
-	irq_set_chained_handler_and_data(virq, tango_msi_isr, pcie);
-
-	return pci_host_common_probe(pdev);
-}
-
-static const struct of_device_id tango_pcie_ids[] = {
-	{
-		.compatible = "sigma,smp8759-pcie",
-		.data = &smp8759_ecam_ops,
-	},
-	{ },
-};
-
-static struct platform_driver tango_pcie_driver = {
-	.probe	= tango_pcie_probe,
-	.driver	= {
-		.name			= KBUILD_MODNAME,
-		.of_match_table		= tango_pcie_ids,
-		.suppress_bind_attrs	= true,
-	},
-};
-builtin_platform_driver(tango_pcie_driver);
-
-/*
- * The root complex advertises the wrong device class.
- * Header Type 1 is for PCI-to-PCI bridges.
- */
-static void tango_fixup_class(struct pci_dev *dev)
-{
-	dev->class = PCI_CLASS_BRIDGE_PCI << 8;
-}
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SIGMA, 0x0024, tango_fixup_class);
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SIGMA, 0x0028, tango_fixup_class);
-
-/*
- * The root complex exposes a "fake" BAR, which is used to filter
- * bus-to-system accesses. Only accesses within the range defined by this
- * BAR are forwarded to the host, others are ignored.
- *
- * By default, the DMA framework expects an identity mapping, and DRAM0 is
- * mapped at 0x80000000.
- */
-static void tango_fixup_bar(struct pci_dev *dev)
-{
-	dev->non_compliant_bars = true;
-	pci_write_config_dword(dev, PCI_BASE_ADDRESS_0, 0x80000000);
-}
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SIGMA, 0x0024, tango_fixup_bar);
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SIGMA, 0x0028, tango_fixup_bar);
@@ -404,6 +404,7 @@ static int xilinx_cpm_pcie_init_irq_domain(struct xilinx_cpm_pcie_port *port)
 	return 0;
 out:
 	xilinx_cpm_free_irq_domains(port);
+	of_node_put(pcie_intc_node);
 	dev_err(dev, "Failed to allocate IRQ domains\n");
 
 	return -ENOMEM;
@@ -12,3 +12,16 @@ config PCI_EPF_TEST
 	   for PCI Endpoint.
 
 	   If in doubt, say "N" to disable Endpoint test driver.
+
+config PCI_EPF_NTB
+	tristate "PCI Endpoint NTB driver"
+	depends on PCI_ENDPOINT
+	select CONFIGFS_FS
+	help
+	  Select this configuration option to enable the Non-Transparent
+	  Bridge (NTB) driver for PCI Endpoint. NTB driver implements NTB
+	  controller functionality using multiple PCIe endpoint instances.
+	  It can support NTB endpoint function devices created using
+	  device tree.
+
+	  If in doubt, say "N" to disable Endpoint NTB driver.
@@ -4,3 +4,4 @@
 #
 
 obj-$(CONFIG_PCI_EPF_TEST)		+= pci-epf-test.o
+obj-$(CONFIG_PCI_EPF_NTB)		+= pci-epf-ntb.o
 drivers/pci/endpoint/functions/pci-epf-ntb.c | 2128 ++++++++++++++++++++ (new file; diff suppressed because it is too large)
@@ -619,7 +619,8 @@ static void pci_epf_test_unbind(struct pci_epf *epf)
 
 		if (epf_test->reg[bar]) {
 			pci_epc_clear_bar(epc, epf->func_no, epf_bar);
-			pci_epf_free_space(epf, epf_test->reg[bar], bar);
+			pci_epf_free_space(epf, epf_test->reg[bar], bar,
+					   PRIMARY_INTERFACE);
 		}
 	}
 }
@@ -651,7 +652,8 @@ static int pci_epf_test_set_bar(struct pci_epf *epf)
 
 		ret = pci_epc_set_bar(epc, epf->func_no, epf_bar);
 		if (ret) {
-			pci_epf_free_space(epf, epf_test->reg[bar], bar);
+			pci_epf_free_space(epf, epf_test->reg[bar], bar,
+					   PRIMARY_INTERFACE);
 			dev_err(dev, "Failed to set BAR%d\n", bar);
 			if (bar == test_reg_bar)
 				return ret;
@@ -771,7 +773,7 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf)
 	}
 
 	base = pci_epf_alloc_space(epf, test_reg_size, test_reg_bar,
-				   epc_features->align);
+				   epc_features->align, PRIMARY_INTERFACE);
 	if (!base) {
 		dev_err(dev, "Failed to allocated register space\n");
 		return -ENOMEM;
@@ -789,7 +791,8 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf)
 			continue;
 
 		base = pci_epf_alloc_space(epf, bar_size[bar], bar,
-					   epc_features->align);
+					   epc_features->align,
+					   PRIMARY_INTERFACE);
 		if (!base)
 			dev_err(dev, "Failed to allocate space for BAR%d\n",
 				bar);
@@ -834,6 +837,8 @@ static int pci_epf_test_bind(struct pci_epf *epf)
 		linkup_notifier = epc_features->linkup_notifier;
 		core_init_notifier = epc_features->core_init_notifier;
 		test_reg_bar = pci_epc_get_first_free_bar(epc_features);
+		if (test_reg_bar < 0)
+			return -EINVAL;
 		pci_epf_configure_bar(epf, epc_features);
 	}
 
@@ -21,6 +21,9 @@ static struct config_group *controllers_group;
 
 struct pci_epf_group {
 	struct config_group group;
+	struct config_group primary_epc_group;
+	struct config_group secondary_epc_group;
+	struct delayed_work cfs_work;
 	struct pci_epf *epf;
 	int index;
 };
@@ -41,6 +44,127 @@ static inline struct pci_epc_group *to_pci_epc_group(struct config_item *item)
 	return container_of(to_config_group(item), struct pci_epc_group, group);
 }
 
+static int pci_secondary_epc_epf_link(struct config_item *epf_item,
+				      struct config_item *epc_item)
+{
+	int ret;
+	struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent);
+	struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
+	struct pci_epc *epc = epc_group->epc;
+	struct pci_epf *epf = epf_group->epf;
+
+	ret = pci_epc_add_epf(epc, epf, SECONDARY_INTERFACE);
+	if (ret)
+		return ret;
+
+	ret = pci_epf_bind(epf);
+	if (ret) {
+		pci_epc_remove_epf(epc, epf, SECONDARY_INTERFACE);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void pci_secondary_epc_epf_unlink(struct config_item *epc_item,
+					 struct config_item *epf_item)
+{
+	struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent);
+	struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
+	struct pci_epc *epc;
+	struct pci_epf *epf;
+
+	WARN_ON_ONCE(epc_group->start);
+
+	epc = epc_group->epc;
+	epf = epf_group->epf;
+	pci_epf_unbind(epf);
+	pci_epc_remove_epf(epc, epf, SECONDARY_INTERFACE);
+}
+
+static struct configfs_item_operations pci_secondary_epc_item_ops = {
+	.allow_link	= pci_secondary_epc_epf_link,
+	.drop_link	= pci_secondary_epc_epf_unlink,
+};
+
+static const struct config_item_type pci_secondary_epc_type = {
+	.ct_item_ops	= &pci_secondary_epc_item_ops,
+	.ct_owner	= THIS_MODULE,
+};
+
+static struct config_group
+*pci_ep_cfs_add_secondary_group(struct pci_epf_group *epf_group)
+{
+	struct config_group *secondary_epc_group;
+
+	secondary_epc_group = &epf_group->secondary_epc_group;
+	config_group_init_type_name(secondary_epc_group, "secondary",
+				    &pci_secondary_epc_type);
+	configfs_register_group(&epf_group->group, secondary_epc_group);
+
+	return secondary_epc_group;
+}
+
+static int pci_primary_epc_epf_link(struct config_item *epf_item,
+				    struct config_item *epc_item)
+{
+	int ret;
+	struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent);
+	struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
+	struct pci_epc *epc = epc_group->epc;
+	struct pci_epf *epf = epf_group->epf;
+
+	ret = pci_epc_add_epf(epc, epf, PRIMARY_INTERFACE);
+	if (ret)
+		return ret;
+
+	ret = pci_epf_bind(epf);
+	if (ret) {
+		pci_epc_remove_epf(epc, epf, PRIMARY_INTERFACE);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void pci_primary_epc_epf_unlink(struct config_item *epc_item,
+				       struct config_item *epf_item)
+{
+	struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent);
+	struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
+	struct pci_epc *epc;
+	struct pci_epf *epf;
+
+	WARN_ON_ONCE(epc_group->start);
+
+	epc = epc_group->epc;
+	epf = epf_group->epf;
+	pci_epf_unbind(epf);
+	pci_epc_remove_epf(epc, epf, PRIMARY_INTERFACE);
+}
+
+static struct configfs_item_operations pci_primary_epc_item_ops = {
+	.allow_link	= pci_primary_epc_epf_link,
+	.drop_link	= pci_primary_epc_epf_unlink,
+};
+
+static const struct config_item_type pci_primary_epc_type = {
+	.ct_item_ops	= &pci_primary_epc_item_ops,
+	.ct_owner	= THIS_MODULE,
+};
+
+static struct config_group
+*pci_ep_cfs_add_primary_group(struct pci_epf_group *epf_group)
+{
+	struct config_group *primary_epc_group = &epf_group->primary_epc_group;
+
+	config_group_init_type_name(primary_epc_group, "primary",
+				    &pci_primary_epc_type);
+	configfs_register_group(&epf_group->group, primary_epc_group);
+
+	return primary_epc_group;
+}
+
 static ssize_t pci_epc_start_store(struct config_item *item, const char *page,
 				   size_t len)
 {
@@ -94,13 +218,13 @@ static int pci_epc_epf_link(struct config_item *epc_item,
 	struct pci_epc *epc = epc_group->epc;
 	struct pci_epf *epf = epf_group->epf;
 
-	ret = pci_epc_add_epf(epc, epf);
+	ret = pci_epc_add_epf(epc, epf, PRIMARY_INTERFACE);
 	if (ret)
 		return ret;
 
 	ret = pci_epf_bind(epf);
 	if (ret) {
-		pci_epc_remove_epf(epc, epf);
+		pci_epc_remove_epf(epc, epf, PRIMARY_INTERFACE);
 		return ret;
 	}
 
@@ -120,7 +244,7 @@ static void pci_epc_epf_unlink(struct config_item *epc_item,
 	epc = epc_group->epc;
 	epf = epf_group->epf;
 	pci_epf_unbind(epf);
-	pci_epc_remove_epf(epc, epf);
+	pci_epc_remove_epf(epc, epf, PRIMARY_INTERFACE);
 }
 
 static struct configfs_item_operations pci_epc_item_ops = {
@@ -366,12 +490,53 @@ static struct configfs_item_operations pci_epf_ops = {
 	.release	= pci_epf_release,
 };
 
+static struct config_group *pci_epf_type_make(struct config_group *group,
+					      const char *name)
+{
+	struct pci_epf_group *epf_group = to_pci_epf_group(&group->cg_item);
+	struct config_group *epf_type_group;
+
+	epf_type_group = pci_epf_type_add_cfs(epf_group->epf, group);
+	return epf_type_group;
+}
+
+static void pci_epf_type_drop(struct config_group *group,
+			      struct config_item *item)
+{
+	config_item_put(item);
+}
+
+static struct configfs_group_operations pci_epf_type_group_ops = {
+	.make_group	= &pci_epf_type_make,
+	.drop_item	= &pci_epf_type_drop,
+};
+
 static const struct config_item_type pci_epf_type = {
+	.ct_group_ops	= &pci_epf_type_group_ops,
 	.ct_item_ops	= &pci_epf_ops,
 	.ct_attrs	= pci_epf_attrs,
 	.ct_owner	= THIS_MODULE,
 };
 
+static void pci_epf_cfs_work(struct work_struct *work)
+{
+	struct pci_epf_group *epf_group;
+	struct config_group *group;
+
+	epf_group = container_of(work, struct pci_epf_group, cfs_work.work);
+	group = pci_ep_cfs_add_primary_group(epf_group);
+	if (IS_ERR(group)) {
+		pr_err("failed to create 'primary' EPC interface\n");
+		return;
+	}
+
+	group = pci_ep_cfs_add_secondary_group(epf_group);
+	if (IS_ERR(group)) {
+		pr_err("failed to create 'secondary' EPC interface\n");
+		return;
+	}
+}
+
 static struct config_group *pci_epf_make(struct config_group *group,
 					 const char *name)
 {
@@ -410,10 +575,15 @@ static struct config_group *pci_epf_make(struct config_group *group,
 		goto free_name;
 	}
 
+	epf->group = &epf_group->group;
 	epf_group->epf = epf;
 
 	kfree(epf_name);
 
+	INIT_DELAYED_WORK(&epf_group->cfs_work, pci_epf_cfs_work);
+	queue_delayed_work(system_wq, &epf_group->cfs_work,
+			   msecs_to_jiffies(1));
+
 	return &epf_group->group;
 
 free_name:
@@ -87,24 +87,50 @@ EXPORT_SYMBOL_GPL(pci_epc_get);
  * pci_epc_get_first_free_bar() - helper to get first unreserved BAR
  * @epc_features: pci_epc_features structure that holds the reserved bar bitmap
  *
- * Invoke to get the first unreserved BAR that can be used for endpoint
+ * Invoke to get the first unreserved BAR that can be used by the endpoint
  * function. For any incorrect value in reserved_bar return '0'.
  */
-unsigned int pci_epc_get_first_free_bar(const struct pci_epc_features
-					*epc_features)
+enum pci_barno
+pci_epc_get_first_free_bar(const struct pci_epc_features *epc_features)
 {
-	int free_bar;
+	return pci_epc_get_next_free_bar(epc_features, BAR_0);
+}
+EXPORT_SYMBOL_GPL(pci_epc_get_first_free_bar);
+
+/**
+ * pci_epc_get_next_free_bar() - helper to get unreserved BAR starting from @bar
+ * @epc_features: pci_epc_features structure that holds the reserved bar bitmap
+ * @bar: the starting BAR number from where unreserved BAR should be searched
+ *
+ * Invoke to get the next unreserved BAR starting from @bar that can be used
+ * for endpoint function. For any incorrect value in reserved_bar return '0'.
+ */
+enum pci_barno pci_epc_get_next_free_bar(const struct pci_epc_features
+					 *epc_features, enum pci_barno bar)
+{
+	unsigned long free_bar;
 
 	if (!epc_features)
-		return 0;
+		return BAR_0;
 
-	free_bar = ffz(epc_features->reserved_bar);
+	/* If 'bar - 1' is a 64-bit BAR, move to the next BAR */
+	if ((epc_features->bar_fixed_64bit << 1) & 1 << bar)
+		bar++;
+
+	/* Find if the reserved BAR is also a 64-bit BAR */
+	free_bar = epc_features->reserved_bar & epc_features->bar_fixed_64bit;
+
+	/* Set the adjacent bit if the reserved BAR is also a 64-bit BAR */
+	free_bar <<= 1;
+	free_bar |= epc_features->reserved_bar;
+
+	free_bar = find_next_zero_bit(&free_bar, 6, bar);
 	if (free_bar > 5)
-		return 0;
+		return NO_BAR;
 
 	return free_bar;
 }
-EXPORT_SYMBOL_GPL(pci_epc_get_first_free_bar);
+EXPORT_SYMBOL_GPL(pci_epc_get_next_free_bar);
 
 /**
  * pci_epc_get_features() - get the features supported by EPC
@@ -204,6 +230,47 @@ int pci_epc_raise_irq(struct pci_epc *epc, u8 func_no,
 }
 EXPORT_SYMBOL_GPL(pci_epc_raise_irq);
 
+/**
+ * pci_epc_map_msi_irq() - Map physical address to MSI address and return
+ *			   MSI data
+ * @epc: the EPC device which has the MSI capability
+ * @func_no: the physical endpoint function number in the EPC device
+ * @phys_addr: the physical address of the outbound region
+ * @interrupt_num: the MSI interrupt number
+ * @entry_size: Size of Outbound address region for each interrupt
+ * @msi_data: the data that should be written in order to raise MSI interrupt
+ *	      with interrupt number as 'interrupt num'
+ * @msi_addr_offset: Offset of MSI address from the aligned outbound address
+ *		     to which the MSI address is mapped
+ *
+ * Invoke to map physical address to MSI address and return MSI data. The
+ * physical address should be an address in the outbound region. This is
+ * required to implement doorbell functionality of NTB wherein EPC on either
+ * side of the interface (primary and secondary) can directly write to the
+ * physical address (in outbound region) of the other interface to ring
+ * doorbell.
+ */
+int pci_epc_map_msi_irq(struct pci_epc *epc, u8 func_no, phys_addr_t phys_addr,
+			u8 interrupt_num, u32 entry_size, u32 *msi_data,
+			u32 *msi_addr_offset)
+{
+	int ret;
+
+	if (IS_ERR_OR_NULL(epc))
+		return -EINVAL;
+
+	if (!epc->ops->map_msi_irq)
+		return -EINVAL;
+
+	mutex_lock(&epc->lock);
+	ret = epc->ops->map_msi_irq(epc, func_no, phys_addr, interrupt_num,
+				    entry_size, msi_data, msi_addr_offset);
+	mutex_unlock(&epc->lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(pci_epc_map_msi_irq);
+
 /**
  * pci_epc_get_msi() - get the number of MSI interrupt numbers allocated
  * @epc: the EPC device to which MSI interrupts was requested
@@ -467,21 +534,28 @@ EXPORT_SYMBOL_GPL(pci_epc_write_header);
  * pci_epc_add_epf() - bind PCI endpoint function to an endpoint controller
  * @epc: the EPC device to which the endpoint function should be added
  * @epf: the endpoint function to be added
+ * @type: Identifies if the EPC is connected to the primary or secondary
+ *	  interface of EPF
  *
  * A PCI endpoint device can have one or more functions. In the case of PCIe,
  * the specification allows up to 8 PCIe endpoint functions. Invoke
  * pci_epc_add_epf() to add a PCI endpoint function to an endpoint controller.
  */
-int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf)
+int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf,
+		    enum pci_epc_interface_type type)
 {
+	struct list_head *list;
 	u32 func_no;
 	int ret = 0;
 
-	if (epf->epc)
+	if (IS_ERR_OR_NULL(epc))
+		return -EINVAL;
+
+	if (type == PRIMARY_INTERFACE && epf->epc)
 		return -EBUSY;
 
-	if (IS_ERR(epc))
-		return -EINVAL;
+	if (type == SECONDARY_INTERFACE && epf->sec_epc)
+		return -EBUSY;
 
 	mutex_lock(&epc->lock);
 	func_no = find_first_zero_bit(&epc->function_num_map,
@@ -498,11 +572,17 @@ int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf)
 	}
 
 	set_bit(func_no, &epc->function_num_map);
-	epf->func_no = func_no;
-	epf->epc = epc;
-
-	list_add_tail(&epf->list, &epc->pci_epf);
+	if (type == PRIMARY_INTERFACE) {
+		epf->func_no = func_no;
+		epf->epc = epc;
+		list = &epf->list;
+	} else {
+		epf->sec_epc_func_no = func_no;
+		epf->sec_epc = epc;
+		list = &epf->sec_epc_list;
+	}
+
+	list_add_tail(list, &epc->pci_epf);
 
 ret:
 	mutex_unlock(&epc->lock);
 
@@ -517,14 +597,26 @@ EXPORT_SYMBOL_GPL(pci_epc_add_epf);
  *
  * Invoke to remove PCI endpoint function from the endpoint controller.
  */
-void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf)
+void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf,
+			enum pci_epc_interface_type type)
 {
+	struct list_head *list;
+	u32 func_no = 0;
+
 	if (!epc || IS_ERR(epc) || !epf)
 		return;
 
+	if (type == PRIMARY_INTERFACE) {
+		func_no = epf->func_no;
+		list = &epf->list;
+	} else {
+		func_no = epf->sec_epc_func_no;
+		list = &epf->sec_epc_list;
+	}
+
 	mutex_lock(&epc->lock);
-	clear_bit(epf->func_no, &epc->function_num_map);
-	list_del(&epf->list);
+	clear_bit(func_no, &epc->function_num_map);
+	list_del(list);
 	epf->epc = NULL;
 	mutex_unlock(&epc->lock);
 }
@@ -20,6 +20,38 @@ static DEFINE_MUTEX(pci_epf_mutex);
 static struct bus_type pci_epf_bus_type;
 static const struct device_type pci_epf_type;
 
+/**
+ * pci_epf_type_add_cfs() - Help function drivers to expose function specific
+ *			    attributes in configfs
+ * @epf: the EPF device that has to be configured using configfs
+ * @group: the parent configfs group (corresponding to entries in
+ *	   pci_epf_device_id)
+ *
+ * Invoke to expose function specific attributes in configfs. If the function
+ * driver does not have anything to expose (attributes configured by user),
+ * return NULL.
+ */
+struct config_group *pci_epf_type_add_cfs(struct pci_epf *epf,
+					  struct config_group *group)
+{
+	struct config_group *epf_type_group;
+
+	if (!epf->driver) {
+		dev_err(&epf->dev, "epf device not bound to driver\n");
+		return NULL;
+	}
+
+	if (!epf->driver->ops->add_cfs)
+		return NULL;
+
+	mutex_lock(&epf->lock);
+	epf_type_group = epf->driver->ops->add_cfs(epf, group);
+	mutex_unlock(&epf->lock);
+
+	return epf_type_group;
+}
+EXPORT_SYMBOL_GPL(pci_epf_type_add_cfs);
+
 /**
  * pci_epf_unbind() - Notify the function driver that the binding between the
  *		      EPF device and EPC device has been lost
@@ -74,24 +106,37 @@ EXPORT_SYMBOL_GPL(pci_epf_bind);
  * @epf: the EPF device from whom to free the memory
  * @addr: the virtual address of the PCI EPF register space
  * @bar: the BAR number corresponding to the register space
+ * @type: Identifies if the allocated space is for primary EPC or secondary EPC
 *
 * Invoke to free the allocated PCI EPF register space.
 */
-void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar)
+void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar,
+			enum pci_epc_interface_type type)
 {
-	struct device *dev = epf->epc->dev.parent;
+	struct pci_epf_bar *epf_bar;
+	struct pci_epc *epc;
+	struct device *dev;
 
 	if (!addr)
 		return;
 
-	dma_free_coherent(dev, epf->bar[bar].size, addr,
-			  epf->bar[bar].phys_addr);
+	if (type == PRIMARY_INTERFACE) {
+		epc = epf->epc;
+		epf_bar = epf->bar;
+	} else {
+		epc = epf->sec_epc;
+		epf_bar = epf->sec_epc_bar;
+	}
 
-	epf->bar[bar].phys_addr = 0;
-	epf->bar[bar].addr = NULL;
-	epf->bar[bar].size = 0;
-	epf->bar[bar].barno = 0;
-	epf->bar[bar].flags = 0;
+	dev = epc->dev.parent;
+	dma_free_coherent(dev, epf_bar[bar].size, addr,
+			  epf_bar[bar].phys_addr);
+
+	epf_bar[bar].phys_addr = 0;
+	epf_bar[bar].addr = NULL;
+	epf_bar[bar].size = 0;
+	epf_bar[bar].barno = 0;
+	epf_bar[bar].flags = 0;
 }
 EXPORT_SYMBOL_GPL(pci_epf_free_space);
 
@@ -101,15 +146,18 @@ EXPORT_SYMBOL_GPL(pci_epf_free_space);
 * @size: the size of the memory that has to be allocated
 * @bar: the BAR number corresponding to the allocated register space
 * @align: alignment size for the allocation region
+ * @type: Identifies if the allocation is for primary EPC or secondary EPC
 *
 * Invoke to allocate memory for the PCI EPF register space.
 */
 void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
-			  size_t align)
+			  size_t align, enum pci_epc_interface_type type)
 {
-	void *space;
-	struct device *dev = epf->epc->dev.parent;
+	struct pci_epf_bar *epf_bar;
 	dma_addr_t phys_addr;
+	struct pci_epc *epc;
+	struct device *dev;
+	void *space;
 
 	if (size < 128)
 		size = 128;
@@ -119,17 +167,26 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
 	else
 		size = roundup_pow_of_two(size);
 
+	if (type == PRIMARY_INTERFACE) {
+		epc = epf->epc;
+		epf_bar = epf->bar;
+	} else {
+		epc = epf->sec_epc;
+		epf_bar = epf->sec_epc_bar;
+	}
+
+	dev = epc->dev.parent;
 	space = dma_alloc_coherent(dev, size, &phys_addr, GFP_KERNEL);
 	if (!space) {
 		dev_err(dev, "failed to allocate mem space\n");
 		return NULL;
 	}
 
-	epf->bar[bar].phys_addr = phys_addr;
-	epf->bar[bar].addr = space;
-	epf->bar[bar].size = size;
-	epf->bar[bar].barno = bar;
-	epf->bar[bar].flags |= upper_32_bits(size) ?
+	epf_bar[bar].phys_addr = phys_addr;
+	epf_bar[bar].addr = space;
+	epf_bar[bar].size = size;
+	epf_bar[bar].barno = bar;
+	epf_bar[bar].flags |= upper_32_bits(size) ?
 			PCI_BASE_ADDRESS_MEM_TYPE_64 :
 			PCI_BASE_ADDRESS_MEM_TYPE_32;
 
@@ -282,22 +339,6 @@ struct pci_epf *pci_epf_create(const char *name)
 }
 EXPORT_SYMBOL_GPL(pci_epf_create);
 
-const struct pci_epf_device_id *
-pci_epf_match_device(const struct pci_epf_device_id *id, struct pci_epf *epf)
-{
-	if (!id || !epf)
-		return NULL;
-
-	while (*id->name) {
-		if (strcmp(epf->name, id->name) == 0)
-			return id;
-		id++;
-	}
-
-	return NULL;
-}
-EXPORT_SYMBOL_GPL(pci_epf_match_device);
-
 static void pci_epf_dev_release(struct device *dev)
 {
 	struct pci_epf *epf = to_pci_epf(dev);
@@ -176,9 +176,6 @@ int acpiphp_unregister_attention(struct acpiphp_attention_info *info);
 int acpiphp_register_hotplug_slot(struct acpiphp_slot *slot, unsigned int sun);
 void acpiphp_unregister_hotplug_slot(struct acpiphp_slot *slot);
 
-/* acpiphp_glue.c */
-typedef int (*acpiphp_callback)(struct acpiphp_slot *slot, void *data);
-
 int acpiphp_enable_slot(struct acpiphp_slot *slot);
 int acpiphp_disable_slot(struct acpiphp_slot *slot);
 u8 acpiphp_get_power_status(struct acpiphp_slot *slot);
@@ -21,8 +21,9 @@
 #include "pci-bridge-emul.h"
 
 #define PCI_BRIDGE_CONF_END	PCI_STD_HEADER_SIZEOF
+#define PCI_CAP_PCIE_SIZEOF	(PCI_EXP_SLTSTA2 + 2)
 #define PCI_CAP_PCIE_START	PCI_BRIDGE_CONF_END
-#define PCI_CAP_PCIE_END	(PCI_CAP_PCIE_START + PCI_EXP_SLTSTA2 + 2)
+#define PCI_CAP_PCIE_END	(PCI_CAP_PCIE_START + PCI_CAP_PCIE_SIZEOF)
 
 /**
  * struct pci_bridge_reg_behavior - register bits behaviors
@@ -46,7 +47,8 @@ struct pci_bridge_reg_behavior {
 	u32 w1c;
 };
 
-static const struct pci_bridge_reg_behavior pci_regs_behavior[] = {
+static const
+struct pci_bridge_reg_behavior pci_regs_behavior[PCI_STD_HEADER_SIZEOF / 4] = {
 	[PCI_VENDOR_ID / 4] = { .ro = ~0 },
 	[PCI_COMMAND / 4] = {
 		.rw = (PCI_COMMAND_IO | PCI_COMMAND_MEMORY |
@@ -164,7 +166,8 @@ static const struct pci_bridge_reg_behavior pci_regs_behavior[] = {
 	},
 };
 
-static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
+static const
+struct pci_bridge_reg_behavior pcie_cap_regs_behavior[PCI_CAP_PCIE_SIZEOF / 4] = {
 	[PCI_CAP_LIST_ID / 4] = {
 		/*
		 * Capability ID, Next Capability Pointer and
@@ -260,6 +263,8 @@ static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
 int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
 			 unsigned int flags)
 {
+	BUILD_BUG_ON(sizeof(bridge->conf) != PCI_BRIDGE_CONF_END);
+
 	bridge->conf.class_revision |= cpu_to_le32(PCI_CLASS_BRIDGE_PCI << 16);
 	bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE;
 	bridge->conf.cache_line_size = 0x10;
@@ -4030,6 +4030,10 @@ int pci_register_io_range(struct fwnode_handle *fwnode, phys_addr_t addr,
 	ret = logic_pio_register_range(range);
+	if (ret)
+		kfree(range);
+
+	/* Ignore duplicates due to deferred probing */
+	if (ret == -EEXIST)
+		ret = 0;
 #endif
 
 	return ret;
@@ -133,14 +133,6 @@ config PCIE_PTM
 	  This is only useful if you have devices that support PTM, but it
 	  is safe to enable even if you don't.
 
-config PCIE_BW
-	bool "PCI Express Bandwidth Change Notification"
-	depends on PCIEPORTBUS
-	help
-	  This enables PCI Express Bandwidth Change Notification.  If
-	  you know link width or rate changes occur only to correct
-	  unreliable links, you may answer Y.
-
 config PCIE_EDR
 	bool "PCI Express Error Disconnect Recover support"
 	depends on PCIE_DPC && ACPI
@@ -12,5 +12,4 @@ obj-$(CONFIG_PCIEAER_INJECT)	+= aer_inject.o
 obj-$(CONFIG_PCIE_PME)	+= pme.o
 obj-$(CONFIG_PCIE_DPC)	+= dpc.o
 obj-$(CONFIG_PCIE_PTM)	+= ptm.o
-obj-$(CONFIG_PCIE_BW)	+= bw_notification.o
 obj-$(CONFIG_PCIE_EDR)	+= edr.o
@@ -1388,7 +1388,7 @@ static pci_ers_result_t aer_root_reset(struct pci_dev *dev)
 	if (type == PCI_EXP_TYPE_RC_END)
 		root = dev->rcec;
 	else
-		root = dev;
+		root = pcie_find_root_port(dev);
 
 	/*
 	 * If the platform retained control of AER, an RCiEP may not have
@@ -1414,7 +1414,8 @@ static pci_ers_result_t aer_root_reset(struct pci_dev *dev)
 		}
 	} else {
 		rc = pci_bus_error_reset(dev);
-		pci_info(dev, "Root Port link has been reset (%d)\n", rc);
+		pci_info(dev, "%s Port link has been reset (%d)\n",
+			 pci_is_root_bus(dev->bus) ? "Root" : "Downstream", rc);
 	}
 
 	if ((host->native_aer || pcie_ports_native) && aer) {
@@ -1,138 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0+
-/*
- * PCI Express Link Bandwidth Notification services driver
- * Author: Alexandru Gagniuc <mr.nuke.me@gmail.com>
- *
- * Copyright (C) 2019, Dell Inc
- *
- * The PCIe Link Bandwidth Notification provides a way to notify the
- * operating system when the link width or data rate changes. This
- * capability is required for all root ports and downstream ports
- * supporting links wider than x1 and/or multiple link speeds.
- *
- * This service port driver hooks into the bandwidth notification interrupt
- * and warns when links become degraded in operation.
- */
-
-#define dev_fmt(fmt) "bw_notification: " fmt
-
-#include "../pci.h"
-#include "portdrv.h"
-
-static bool pcie_link_bandwidth_notification_supported(struct pci_dev *dev)
-{
-	int ret;
-	u32 lnk_cap;
-
-	ret = pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnk_cap);
-	return (ret == PCIBIOS_SUCCESSFUL) && (lnk_cap & PCI_EXP_LNKCAP_LBNC);
-}
-
-static void pcie_enable_link_bandwidth_notification(struct pci_dev *dev)
-{
-	u16 lnk_ctl;
-
-	pcie_capability_write_word(dev, PCI_EXP_LNKSTA, PCI_EXP_LNKSTA_LBMS);
-
-	pcie_capability_read_word(dev, PCI_EXP_LNKCTL, &lnk_ctl);
-	lnk_ctl |= PCI_EXP_LNKCTL_LBMIE;
-	pcie_capability_write_word(dev, PCI_EXP_LNKCTL, lnk_ctl);
-}
-
-static void pcie_disable_link_bandwidth_notification(struct pci_dev *dev)
-{
-	u16 lnk_ctl;
-
-	pcie_capability_read_word(dev, PCI_EXP_LNKCTL, &lnk_ctl);
-	lnk_ctl &= ~PCI_EXP_LNKCTL_LBMIE;
-	pcie_capability_write_word(dev, PCI_EXP_LNKCTL, lnk_ctl);
-}
-
-static irqreturn_t pcie_bw_notification_irq(int irq, void *context)
-{
-	struct pcie_device *srv = context;
-	struct pci_dev *port = srv->port;
-	u16 link_status, events;
-	int ret;
-
-	ret = pcie_capability_read_word(port, PCI_EXP_LNKSTA, &link_status);
-	events = link_status & PCI_EXP_LNKSTA_LBMS;
-
-	if (ret != PCIBIOS_SUCCESSFUL || !events)
-		return IRQ_NONE;
-
-	pcie_capability_write_word(port, PCI_EXP_LNKSTA, events);
-	pcie_update_link_speed(port->subordinate, link_status);
-	return IRQ_WAKE_THREAD;
-}
-
-static irqreturn_t pcie_bw_notification_handler(int irq, void *context)
-{
-	struct pcie_device *srv = context;
-	struct pci_dev *port = srv->port;
-	struct pci_dev *dev;
-
-	/*
-	 * Print status from downstream devices, not this root port or
-	 * downstream switch port.
-	 */
-	down_read(&pci_bus_sem);
-	list_for_each_entry(dev, &port->subordinate->devices, bus_list)
-		pcie_report_downtraining(dev);
-	up_read(&pci_bus_sem);
-
-	return IRQ_HANDLED;
-}
-
-static int pcie_bandwidth_notification_probe(struct pcie_device *srv)
-{
-	int ret;
-
-	/* Single-width or single-speed ports do not have to support this. */
-	if (!pcie_link_bandwidth_notification_supported(srv->port))
-		return -ENODEV;
-
-	ret = request_threaded_irq(srv->irq, pcie_bw_notification_irq,
-				   pcie_bw_notification_handler,
-				   IRQF_SHARED, "PCIe BW notif", srv);
-	if (ret)
-		return ret;
-
-	pcie_enable_link_bandwidth_notification(srv->port);
-	pci_info(srv->port, "enabled with IRQ %d\n", srv->irq);
-
-	return 0;
-}
-
-static void pcie_bandwidth_notification_remove(struct pcie_device *srv)
-{
-	pcie_disable_link_bandwidth_notification(srv->port);
-	free_irq(srv->irq, srv);
-}
-
-static int pcie_bandwidth_notification_suspend(struct pcie_device *srv)
-{
-	pcie_disable_link_bandwidth_notification(srv->port);
-	return 0;
-}
-
-static int pcie_bandwidth_notification_resume(struct pcie_device *srv)
-{
-	pcie_enable_link_bandwidth_notification(srv->port);
-	return 0;
-}
-
-static struct pcie_port_service_driver pcie_bandwidth_notification_driver = {
-	.name		= "pcie_bw_notification",
-	.port_type	= PCIE_ANY_PORT,
-	.service	= PCIE_PORT_SERVICE_BWNOTIF,
-	.probe		= pcie_bandwidth_notification_probe,
-	.suspend	= pcie_bandwidth_notification_suspend,
-	.resume		= pcie_bandwidth_notification_resume,
-	.remove		= pcie_bandwidth_notification_remove,
-};
-
-int __init pcie_bandwidth_notification_init(void)
-{
-	return pcie_port_service_register(&pcie_bandwidth_notification_driver);
-}
@@ -198,8 +198,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
 	pci_dbg(bridge, "broadcast error_detected message\n");
 	if (state == pci_channel_io_frozen) {
 		pci_walk_bridge(bridge, report_frozen_detected, &status);
-		status = reset_subordinates(bridge);
-		if (status != PCI_ERS_RESULT_RECOVERED) {
+		if (reset_subordinates(bridge) != PCI_ERS_RESULT_RECOVERED) {
 			pci_warn(bridge, "subordinate device reset failed\n");
 			goto failed;
 		}
@@ -231,15 +230,14 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
 	pci_walk_bridge(bridge, report_resume, &status);
 
 	/*
-	 * If we have native control of AER, clear error status in the Root
-	 * Port or Downstream Port that signaled the error. If the
-	 * platform retained control of AER, it is responsible for clearing
-	 * this status. In that case, the signaling device may not even be
-	 * visible to the OS.
+	 * If we have native control of AER, clear error status in the device
+	 * that detected the error. If the platform retained control of AER,
+	 * it is responsible for clearing this status. In that case, the
+	 * signaling device may not even be visible to the OS.
 	 */
 	if (host->native_aer || pcie_ports_native) {
-		pcie_clear_device_status(bridge);
-		pci_aer_clear_nonfatal_status(bridge);
+		pcie_clear_device_status(dev);
+		pci_aer_clear_nonfatal_status(dev);
 	}
 	pci_info(bridge, "device recovery successful\n");
 	return status;
@@ -53,12 +53,6 @@ int pcie_dpc_init(void);
 static inline int pcie_dpc_init(void) { return 0; }
 #endif
 
-#ifdef CONFIG_PCIE_BW
-int pcie_bandwidth_notification_init(void);
-#else
-static inline int pcie_bandwidth_notification_init(void) { return 0; }
-#endif
-
 /* Port Type */
 #define PCIE_ANY_PORT	(~0)
 
@@ -153,7 +153,8 @@ static void pcie_portdrv_remove(struct pci_dev *dev)
 static pci_ers_result_t pcie_portdrv_error_detected(struct pci_dev *dev,
 					pci_channel_state_t error)
 {
-	/* Root Port has no impact. Always recovers. */
+	if (error == pci_channel_io_frozen)
+		return PCI_ERS_RESULT_NEED_RESET;
 	return PCI_ERS_RESULT_CAN_RECOVER;
 }
 
@@ -255,7 +256,6 @@ static void __init pcie_init_services(void)
 	pcie_pme_init();
 	pcie_dpc_init();
 	pcie_hp_init();
-	pcie_bandwidth_notification_init();
 }
 
 static int __init pcie_portdrv_init(void)
@@ -168,7 +168,6 @@ struct pci_bus *pci_find_next_bus(const struct pci_bus *from)
 	struct list_head *n;
 	struct pci_bus *b = NULL;
 
-	WARN_ON(in_interrupt());
 	down_read(&pci_bus_sem);
 	n = from ? from->node.next : pci_root_buses.next;
 	if (n != &pci_root_buses)
@@ -196,7 +195,6 @@ struct pci_dev *pci_get_slot(struct pci_bus *bus, unsigned int devfn)
 {
 	struct pci_dev *dev;
 
-	WARN_ON(in_interrupt());
 	down_read(&pci_bus_sem);
 
 	list_for_each_entry(dev, &bus->devices, bus_list) {
@@ -274,7 +272,6 @@ static struct pci_dev *pci_get_dev_by_id(const struct pci_device_id *id,
 	struct device *dev_start = NULL;
 	struct pci_dev *pdev = NULL;
 
-	WARN_ON(in_interrupt());
 	if (from)
 		dev_start = &from->dev;
 	dev = bus_find_device(&pci_bus_type, dev_start, (void *)id,
@@ -381,7 +378,6 @@ int pci_dev_present(const struct pci_device_id *ids)
 {
 	struct pci_dev *found = NULL;
 
-	WARN_ON(in_interrupt());
 	while (ids->vendor || ids->subvendor || ids->class_mask) {
 		found = pci_get_dev_by_id(ids, NULL);
 		if (found) {
@@ -410,10 +410,16 @@ EXPORT_SYMBOL(pci_release_resource);
 int pci_resize_resource(struct pci_dev *dev, int resno, int size)
 {
 	struct resource *res = dev->resource + resno;
+	struct pci_host_bridge *host;
 	int old, ret;
 	u32 sizes;
 	u16 cmd;
 
+	/* Check if we must preserve the firmware's resource assignment */
+	host = pci_find_host_bridge(dev->bus);
+	if (host->preserve_config)
+		return -ENOTSUPP;
+
 	/* Make sure the resource isn't assigned before resizing it. */
 	if (!(res->flags & IORESOURCE_UNSET))
 		return -EBUSY;
@@ -20,7 +20,7 @@ SYSCALL_DEFINE5(pciconfig_read, unsigned long, bus, unsigned long, dfn,
 	u16 word;
 	u32 dword;
 	long err;
-	long cfg_ret;
+	int cfg_ret;
 
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
@@ -46,7 +46,7 @@ SYSCALL_DEFINE5(pciconfig_read, unsigned long, bus, unsigned long, dfn,
 	}
 
 	err = -EIO;
-	if (cfg_ret != PCIBIOS_SUCCESSFUL)
+	if (cfg_ret)
 		goto error;
 
 	switch (len) {
@@ -105,7 +105,7 @@ SYSCALL_DEFINE5(pciconfig_write, unsigned long, bus, unsigned long, dfn,
 		if (err)
 			break;
 		err = pci_user_write_config_byte(dev, off, byte);
-		if (err != PCIBIOS_SUCCESSFUL)
+		if (err)
 			err = -EIO;
 		break;
 
@@ -114,7 +114,7 @@ SYSCALL_DEFINE5(pciconfig_write, unsigned long, bus, unsigned long, dfn,
 		if (err)
 			break;
 		err = pci_user_write_config_word(dev, off, word);
-		if (err != PCIBIOS_SUCCESSFUL)
+		if (err)
 			err = -EIO;
 		break;
 
@@ -123,7 +123,7 @@ SYSCALL_DEFINE5(pciconfig_write, unsigned long, bus, unsigned long, dfn,
 		if (err)
 			break;
 		err = pci_user_write_config_dword(dev, off, dword);
-		if (err != PCIBIOS_SUCCESSFUL)
+		if (err)
 			err = -EIO;
 		break;