515.43.04

Andy Ritger
2022-05-09 13:18:59 -07:00
commit 1739a20efc
2519 changed files with 1060036 additions and 0 deletions

26
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,26 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**NVIDIA Driver Version**
Please write the version of the NVIDIA driver you are using.
**GPU**
Please write the particular model of NVIDIA GPU you are using.
**Describe the bug**
Please write a clear and concise description of what the bug is.
**To Reproduce**
Please write the steps to reproduce the behavior.
**Expected behavior**
Please write a clear and concise description of what you expected to happen.
**Please reproduce the problem, run nvidia-bug-report.sh, and attach the resulting nvidia-bug-report.log.gz.**
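
For example (a minimal sketch; the script ships with the driver and typically needs to run as root):

    sudo nvidia-bug-report.sh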

5
.gitignore vendored Normal file

@@ -0,0 +1,5 @@
*.o
*.o_binary
*.o.cmd
*.o.d
_out/

141
CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,141 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Using welcoming and inclusive language
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall
community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery and unwelcome sexual attention or advances of
any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address,
without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, or to ban temporarily or permanently any
contributor for other behaviors that they deem inappropriate, threatening,
offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces when
an individual is representing the project or its community. Examples of representing
our community include using an official e-mail address, posting via an official
social media account, or acting as an appointed representative at an online or
offline event. Representation of a project may be further defined and clarified
by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders and moderators responsible for enforcement at
GitHub_Conduct@nvidia.com.
All complaints will be reviewed and investigated and will result in a response
that is deemed necessary and appropriate to the circumstances. Leaders and moderators
are obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Moderators who do not follow or enforce the Code of Conduct in good faith
may face temporary or permanent repercussions as determined by other members of the
community's leadership.
## Enforcement Guidelines
Community leaders and moderators will follow these Community Impact Guidelines
in determining the consequences for any action they deem in violation of this
Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community moderators, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of
actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating an egregious single violation, or a pattern of
violation of community standards, including sustained inappropriate behavior,
harassment of an individual, or aggression toward or disparagement of classes of
individuals.
**Consequence**: A permanent ban from any sort of public interaction within the
community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].
For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
[https://www.contributor-covenant.org/translations][translations].
[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations

369
COPYING Normal file

@@ -0,0 +1,369 @@
Except where noted otherwise, the individual files within this package are
licensed as MIT:
Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
However, when linked together to form a Linux kernel module, the resulting Linux
kernel module is dual licensed as MIT/GPLv2.
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.
51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Library General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Library General
Public License instead of this License.

76
Makefile Normal file

@@ -0,0 +1,76 @@
###########################################################################
# This is the top level makefile for the NVIDIA Linux kernel module source
# package.
#
# To build: run `make modules`
# To install the built kernel modules: run (as root) `make modules_install`
###########################################################################
include utils.mk
all: modules
nv_kernel_o = src/nvidia/$(OUTPUTDIR)/nv-kernel.o
nv_kernel_o_binary = kernel-open/nvidia/nv-kernel.o_binary
nv_modeset_kernel_o = src/nvidia-modeset/$(OUTPUTDIR)/nv-modeset-kernel.o
nv_modeset_kernel_o_binary = kernel-open/nvidia-modeset/nv-modeset-kernel.o_binary
.PHONY: $(nv_kernel_o) $(nv_modeset_kernel_o) modules modules_install
###########################################################################
# nv-kernel.o is the OS agnostic portion of nvidia.ko
###########################################################################
$(nv_kernel_o):
	$(MAKE) -C src/nvidia

$(nv_kernel_o_binary): $(nv_kernel_o)
	cd $(dir $@) && ln -sf ../../$^ $(notdir $@)
###########################################################################
# nv-modeset-kernel.o is the OS agnostic portion of nvidia-modeset.ko
###########################################################################
$(nv_modeset_kernel_o):
	$(MAKE) -C src/nvidia-modeset

$(nv_modeset_kernel_o_binary): $(nv_modeset_kernel_o)
	cd $(dir $@) && ln -sf ../../$^ $(notdir $@)
###########################################################################
# After the OS agnostic portions are built, descend into kernel-open/ and build
# the kernel modules with kbuild.
###########################################################################
modules: $(nv_kernel_o_binary) $(nv_modeset_kernel_o_binary)
	$(MAKE) -C kernel-open modules
###########################################################################
# Install the built kernel modules using kbuild.
###########################################################################
modules_install:
	$(MAKE) -C kernel-open modules_install
###########################################################################
# clean
###########################################################################
.PHONY: clean nvidia.clean nvidia-modeset.clean kernel-open.clean
clean: nvidia.clean nvidia-modeset.clean kernel-open.clean

nvidia.clean:
	$(MAKE) -C src/nvidia clean

nvidia-modeset.clean:
	$(MAKE) -C src/nvidia-modeset clean

kernel-open.clean:
	$(MAKE) -C kernel-open clean

164
README.md Normal file

@@ -0,0 +1,164 @@
# NVIDIA Linux Open GPU Kernel Module Source
This is the source release of the NVIDIA Linux open GPU kernel modules,
version 515.43.04.
## How to Build
To build:
    make modules -j`nproc`
To install, first uninstall any existing NVIDIA kernel modules. Then,
as root:
    make modules_install -j`nproc`
Note that the kernel modules built here must be used with gsp.bin
firmware and user-space NVIDIA GPU driver components from a corresponding
515.43.04 driver release. This can be achieved by installing
the NVIDIA GPU driver from the .run file using the `--no-kernel-modules`
option. E.g.,
    sh ./NVIDIA-Linux-[...].run --no-kernel-modules
## Supported Target CPU Architectures
Currently, the kernel modules can be built for x86_64 or aarch64.
If cross-compiling, set these variables on the make command line:
    TARGET_ARCH=aarch64|x86_64
    CC
    LD
    AR
    CXX
    OBJCOPY
E.g.,
    # compile on x86_64 for aarch64
    make modules -j`nproc` \
        TARGET_ARCH=aarch64 \
        CC=aarch64-linux-gnu-gcc \
        LD=aarch64-linux-gnu-ld \
        AR=aarch64-linux-gnu-ar \
        CXX=aarch64-linux-gnu-g++ \
        OBJCOPY=aarch64-linux-gnu-objcopy
## Other Build Knobs
NV_VERBOSE - Set this to "1" to print each complete command executed;
otherwise, a succinct "CC" line is printed.
DEBUG - Set this to "1" to build the kernel modules as debug. By default, the
build compiles without debugging information. This also enables
various debug log messages in the kernel modules.
These variables can be set on the make command line. E.g.,
    make modules -j`nproc` NV_VERBOSE=1
## Supported Toolchains
Any reasonably modern version of gcc or clang can be used to build the
kernel modules. Note that the kernel interface layers of the kernel
modules must be built with the toolchain that was used to build the
kernel.
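For example, to select a specific compiler (illustrative; `gcc-12` is a placeholder for whichever toolchain built your kernel):

    make modules -j`nproc` CC=gcc-12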
## Supported Linux Kernel Versions
The NVIDIA open kernel modules support the same range of Linux kernel
versions that are supported with the proprietary NVIDIA kernel modules.
This is currently Linux kernel 3.10 or newer.
## How to Contribute
Contributions can be made by creating a pull request on
https://github.com/NVIDIA/open-gpu-kernel-modules
We'll respond via GitHub.
Note that when submitting a pull request, you will be prompted to accept
a Contributor License Agreement.
This code base is shared with NVIDIA's proprietary drivers, and various
processing is performed on the shared code to produce the source code that is
published here. This has several implications for the foreseeable future:
* The GitHub repository will function mostly as a snapshot of each driver
release.
* We do not expect to be able to provide revision history for individual
changes that were made to NVIDIA's shared code base. There will likely
only be one git commit per driver release.
* We may not be able to reflect individual contributions as separate
git commits in the GitHub repository.
* Because the code undergoes various processing prior to publishing here,
contributions made here require manual merging to be applied to the shared
code base. Therefore, large refactoring changes made here may be difficult to
merge and accept back into the shared code base. If you have large
refactoring to suggest, please contact us in advance, so we can coordinate.
## How to Report Issues
Problems specific to the Open GPU Kernel Modules can be reported in the
Issues section of the https://github.com/NVIDIA/open-gpu-kernel-modules
repository.
Further, any of the existing bug reporting venues can be used to communicate
problems to NVIDIA, such as our forum:
https://forums.developer.nvidia.com/c/gpu-graphics/linux/148
or linux-bugs@nvidia.com.
Please see the 'NVIDIA Contact Info and Additional Resources' section
of the NVIDIA GPU Driver README for details.
Please see the separate [SECURITY.md](SECURITY.md) document if you
believe you have discovered a security vulnerability in this software.
## Kernel Interface and OS-Agnostic Components of Kernel Modules
Most of NVIDIA's kernel modules are split into two components:
* An "OS-agnostic" component: this is the component of each kernel module
that is independent of operating system.
* A "kernel interface layer": this is the component of each kernel module
that is specific to the Linux kernel version and configuration.
When packaged in the NVIDIA .run installation package, the OS-agnostic
component is provided as a binary: it is large and time-consuming to
compile, so pre-built versions are provided so that the user does
not have to compile it during every driver installation. For the
nvidia.ko kernel module, this component is named "nv-kernel.o_binary".
For the nvidia-modeset.ko kernel module, this component is named
"nv-modeset-kernel.o_binary". Neither nvidia-drm.ko nor nvidia-uvm.ko
have OS-agnostic components.
The kernel interface layer component for each kernel module must be built
for the target kernel.
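As an illustrative sketch only (the real steps are driven by kbuild and the makefiles in this tree), the per-kernel build conceptually reduces to:

    # compile the kernel interface layer against the target kernel's headers,
    # then link it with the pre-built OS-agnostic component:
    ld -r -o nv-interface.o <interface layer objects>
    ld -r -o nvidia.o nv-interface.o nv-kernel.o_binary
    # kbuild then post-processes nvidia.o into the loadable nvidia.ko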
## Directory Structure Layout
- `kernel-open/` The kernel interface layer
- `kernel-open/nvidia/` The kernel interface layer for nvidia.ko
- `kernel-open/nvidia-drm/` The kernel interface layer for nvidia-drm.ko
- `kernel-open/nvidia-modeset/` The kernel interface layer for nvidia-modeset.ko
- `kernel-open/nvidia-uvm/` The kernel interface layer for nvidia-uvm.ko
- `src/` The OS-agnostic code
- `src/nvidia/` The OS-agnostic code for nvidia.ko
- `src/nvidia-modeset/` The OS-agnostic code for nvidia-modeset.ko
- `src/common/` Utility code used by one or more of nvidia.ko and nvidia-modeset.ko

16
SECURITY.md Normal file

@@ -0,0 +1,16 @@
# Report a Security Vulnerability
To report a potential security vulnerability in any NVIDIA product, please use either:
* this web form: [Security Vulnerability Submission Form](https://www.nvidia.com/object/submit-security-vulnerability.html), or
* send email to: [NVIDIA PSIRT](mailto:psirt@nvidia.com)
**OEM Partners should contact their NVIDIA Customer Program Manager**
If reporting a potential vulnerability via email, please encrypt it using NVIDIA's public PGP key ([see PGP Key page](https://www.nvidia.com/en-us/security/pgp-key/)) and include the following information:
* Product/Driver name and version/branch that contains the vulnerability
* Type of vulnerability (code execution, denial of service, buffer overflow, etc.)
* Instructions to reproduce the vulnerability
* Proof-of-concept or exploit code
* Potential impact of the vulnerability, including how an attacker could exploit the vulnerability
See https://www.nvidia.com/en-us/security/ for past NVIDIA Security Bulletins and Notices.
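
If you choose email, a hypothetical GnuPG invocation might look like this (the key file name and report file are placeholders; download the actual key from the PGP Key page above):

    gpg --import nvidia-psirt-pgp-key.asc
    gpg --encrypt --armor --recipient psirt@nvidia.com vulnerability-report.txt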

9
kernel-open/.gitignore vendored Normal file

@@ -0,0 +1,9 @@
.*.cmd
*.ko
*.mod
*.mod.c
conftest/
conftest[0-9]*.c
modules.order
Module.symvers
nv_compiler.h

245
kernel-open/Kbuild Normal file

@@ -0,0 +1,245 @@
###########################################################################
# Kbuild file for NVIDIA Linux GPU driver kernel modules
###########################################################################
#
# The parent makefile is expected to define:
#
# NV_KERNEL_SOURCES : The root of the kernel source tree.
# NV_KERNEL_OUTPUT : The kernel's output tree.
# NV_KERNEL_MODULES : A whitespace-separated list of modules to build.
# ARCH : The target CPU architecture: x86_64|arm64|powerpc
#
# Kbuild provides the variables:
#
# $(src) : The directory containing this Kbuild file.
# $(obj) : The directory where the output from this build is written.
#
NV_BUILD_TYPE ?= release
#
# Utility macro ASSIGN_PER_OBJ_CFLAGS: to control CFLAGS on a
# per-object basis, Kbuild honors the 'CFLAGS_$(object)' variable.
# E.g., "CFLAGS_nv.o" for CFLAGS that are specific to nv.o. Use this
# macro to assign 'CFLAGS_$(object)' variables for multiple object
# files.
#
# $(1): The object files.
# $(2): The CFLAGS to add for those object files.
#
# With kernel git commit 54b8ae66ae1a3454a7645d159a482c31cd89ab33, the
# handling of object-specific CFLAGS, CFLAGS_$(object), changed. Prior to
# this commit, the CFLAGS_$(object) variable was required to be defined with
# only the object name (CFLAGS_<somefile.o>). With the aforementioned git
# commit, Kbuild now requires the relative path along with the object name
# (CFLAGS_<somepath>/somefile.o). As a result, CFLAGS_$(object) is set twice:
# once with a relative path to the object files and once with just the
# object files.
#
ASSIGN_PER_OBJ_CFLAGS = \
$(foreach _cflags_variable, \
$(notdir $(1)) $(1), \
$(eval $(addprefix CFLAGS_,$(_cflags_variable)) += $(2)))
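#
# Hypothetical usage sketch (object names and the flag are placeholders):
#
#   EXAMPLE_OBJECTS := nvidia/nv.o nvidia/nv-acpi.o
#   $(call ASSIGN_PER_OBJ_CFLAGS, $(EXAMPLE_OBJECTS), -DEXAMPLE_FLAG)
#
# This assigns both CFLAGS_nv.o and CFLAGS_nvidia/nv.o (and likewise for
# nv-acpi.o), covering both Kbuild naming schemes described above.
#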
#
# Include the specifics of the individual NVIDIA kernel modules.
#
# Each of these should:
# - Append to 'obj-m', to indicate the kernel module that should be built.
# - Define the object files that should get built to produce the kernel module.
# - Tie into conftest (see the description below).
#
NV_UNDEF_BEHAVIOR_SANITIZER ?=
ifeq ($(NV_UNDEF_BEHAVIOR_SANITIZER),1)
UBSAN_SANITIZE := y
endif
$(foreach _module, $(NV_KERNEL_MODULES), \
$(eval include $(src)/$(_module)/$(_module).Kbuild))
#
# Define CFLAGS that apply to all the NVIDIA kernel modules. EXTRA_CFLAGS
# is deprecated since 2.6.24 in favor of ccflags-y, but we need to support
# older kernels which do not have ccflags-y. Newer kernels append
# $(EXTRA_CFLAGS) to ccflags-y for compatibility.
#
EXTRA_CFLAGS += -I$(src)/common/inc
EXTRA_CFLAGS += -I$(src)
EXTRA_CFLAGS += -Wall -MD $(DEFINES) $(INCLUDES) -Wno-cast-qual -Wno-error -Wno-format-extra-args
EXTRA_CFLAGS += -D__KERNEL__ -DMODULE -DNVRM
EXTRA_CFLAGS += -DNV_VERSION_STRING=\"515.43.04\"
EXTRA_CFLAGS += -Wno-unused-function
ifneq ($(NV_BUILD_TYPE),debug)
EXTRA_CFLAGS += -Wuninitialized
endif
EXTRA_CFLAGS += -fno-strict-aliasing
ifeq ($(ARCH),arm64)
EXTRA_CFLAGS += -mstrict-align
endif
ifeq ($(NV_BUILD_TYPE),debug)
EXTRA_CFLAGS += -g -gsplit-dwarf
endif
EXTRA_CFLAGS += -ffreestanding
ifeq ($(ARCH),arm64)
EXTRA_CFLAGS += -mgeneral-regs-only -march=armv8-a
endif
ifeq ($(ARCH),x86_64)
EXTRA_CFLAGS += -mno-red-zone -mcmodel=kernel
endif
ifeq ($(ARCH),powerpc)
EXTRA_CFLAGS += -mlittle-endian -mno-strict-align -mno-altivec
endif
EXTRA_CFLAGS += -DNV_UVM_ENABLE
EXTRA_CFLAGS += $(call cc-option,-Werror=undef,)
EXTRA_CFLAGS += -DNV_SPECTRE_V2=$(NV_SPECTRE_V2)
EXTRA_CFLAGS += -DNV_KERNEL_INTERFACE_LAYER
#
# Detect SGI UV systems and apply system-specific optimizations.
#
ifneq ($(wildcard /proc/sgi_uv),)
EXTRA_CFLAGS += -DNV_CONFIG_X86_UV
endif
#
# The conftest.sh script tests various aspects of the target kernel.
# The per-module Kbuild files included above should:
#
# - Append to the NV_CONFTEST_*_COMPILE_TESTS variables to indicate
# which conftests they require.
# - Append to the NV_OBJECTS_DEPEND_ON_CONFTEST variable any object files
# that depend on conftest.
#
# The conftest machinery below will run the requested tests and
# generate the appropriate header files.
#
CC ?= cc
LD ?= ld
NV_CONFTEST_SCRIPT := $(src)/conftest.sh
NV_CONFTEST_HEADER := $(obj)/conftest/headers.h
NV_CONFTEST_CMD := /bin/sh $(NV_CONFTEST_SCRIPT) \
"$(CC)" $(ARCH) $(NV_KERNEL_SOURCES) $(NV_KERNEL_OUTPUT)
NV_CFLAGS_FROM_CONFTEST := $(shell $(NV_CONFTEST_CMD) build_cflags)
NV_CONFTEST_CFLAGS = $(NV_CFLAGS_FROM_CONFTEST) $(EXTRA_CFLAGS) -fno-pie
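#
# For reference, with the assignments above, a conftest invocation expands to
# something like the following (paths illustrative):
#
#   /bin/sh $(src)/conftest.sh "cc" x86_64 \
#       /lib/modules/`uname -r`/build /lib/modules/`uname -r`/build \
#       build_cflags
#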
NV_CONFTEST_COMPILE_TEST_HEADERS := $(obj)/conftest/macros.h
NV_CONFTEST_COMPILE_TEST_HEADERS += $(obj)/conftest/functions.h
NV_CONFTEST_COMPILE_TEST_HEADERS += $(obj)/conftest/symbols.h
NV_CONFTEST_COMPILE_TEST_HEADERS += $(obj)/conftest/types.h
NV_CONFTEST_COMPILE_TEST_HEADERS += $(obj)/conftest/generic.h
NV_CONFTEST_HEADERS := $(obj)/conftest/patches.h
NV_CONFTEST_HEADERS += $(obj)/conftest/headers.h
NV_CONFTEST_HEADERS += $(NV_CONFTEST_COMPILE_TEST_HEADERS)
#
# Generate a header file for a single conftest compile test. Each compile test
# header depends on conftest.sh, as well as the generated conftest/headers.h
# file, which is included in the compile test preamble.
#
$(obj)/conftest/compile-tests/%.h: $(NV_CONFTEST_SCRIPT) $(NV_CONFTEST_HEADER)
	@mkdir -p $(obj)/conftest/compile-tests
	@echo "  CONFTEST: $(notdir $*)"
	@$(NV_CONFTEST_CMD) compile_tests '$(NV_CONFTEST_CFLAGS)' \
	    $(notdir $*) > $@
#
# Concatenate a conftest/*.h header from its constituent compile test headers
#
# $(1): The name of the concatenated header
# $(2): The list of compile tests that make up the header
#
define NV_GENERATE_COMPILE_TEST_HEADER
$(obj)/conftest/$(1).h: $(addprefix $(obj)/conftest/compile-tests/,$(addsuffix .h,$(2)))
	@mkdir -p $(obj)/conftest
	@# concatenate /dev/null to prevent cat from hanging when $$^ is empty
	@cat $$^ /dev/null > $$@
endef
#
# Generate the conftest compile test headers from the lists of compile tests
# provided by the module-specific Kbuild files.
#
NV_CONFTEST_FUNCTION_COMPILE_TESTS ?=
NV_CONFTEST_GENERIC_COMPILE_TESTS ?=
NV_CONFTEST_MACRO_COMPILE_TESTS ?=
NV_CONFTEST_SYMBOL_COMPILE_TESTS ?=
NV_CONFTEST_TYPE_COMPILE_TESTS ?=
$(eval $(call NV_GENERATE_COMPILE_TEST_HEADER,functions,$(NV_CONFTEST_FUNCTION_COMPILE_TESTS)))
$(eval $(call NV_GENERATE_COMPILE_TEST_HEADER,generic,$(NV_CONFTEST_GENERIC_COMPILE_TESTS)))
$(eval $(call NV_GENERATE_COMPILE_TEST_HEADER,macros,$(NV_CONFTEST_MACRO_COMPILE_TESTS)))
$(eval $(call NV_GENERATE_COMPILE_TEST_HEADER,symbols,$(NV_CONFTEST_SYMBOL_COMPILE_TESTS)))
$(eval $(call NV_GENERATE_COMPILE_TEST_HEADER,types,$(NV_CONFTEST_TYPE_COMPILE_TESTS)))
$(obj)/conftest/patches.h: $(NV_CONFTEST_SCRIPT)
	@mkdir -p $(obj)/conftest
	@$(NV_CONFTEST_CMD) patch_check > $@
$(obj)/conftest/headers.h: $(NV_CONFTEST_SCRIPT)
	@mkdir -p $(obj)/conftest
	@$(NV_CONFTEST_CMD) test_kernel_headers '$(NV_CONFTEST_CFLAGS)' > $@
clean-dirs := $(obj)/conftest
# For any object files that depend on conftest, declare the dependency here.
$(addprefix $(obj)/,$(NV_OBJECTS_DEPEND_ON_CONFTEST)): | $(NV_CONFTEST_HEADERS)
# Sanity checks of the build environment and target system/kernel
BUILD_SANITY_CHECKS = \
cc_sanity_check \
cc_version_check \
dom0_sanity_check \
xen_sanity_check \
preempt_rt_sanity_check \
vgpu_kvm_sanity_check \
module_symvers_sanity_check
.PHONY: $(BUILD_SANITY_CHECKS)
$(BUILD_SANITY_CHECKS):
	@$(NV_CONFTEST_CMD) $@ full_output
# Perform all sanity checks before generating the conftest headers
$(NV_CONFTEST_HEADERS): | $(BUILD_SANITY_CHECKS)
# Make the conftest headers depend on the kernel version string
$(obj)/conftest/uts_release: NV_GENERATE_UTS_RELEASE
	@mkdir -p $(dir $@)
	@NV_UTS_RELEASE="// Kernel version: `$(NV_CONFTEST_CMD) compile_tests '$(NV_CONFTEST_CFLAGS)' uts_release`"; \
	if ! [ -f "$@" ] || [ "$$NV_UTS_RELEASE" != "`cat $@`" ]; \
	then echo "$$NV_UTS_RELEASE" > $@; fi
.PHONY: NV_GENERATE_UTS_RELEASE
$(NV_CONFTEST_HEADERS): $(obj)/conftest/uts_release

126
kernel-open/Makefile Normal file

@@ -0,0 +1,126 @@
#
# This Makefile was automatically generated; do not edit.
#
###########################################################################
# Makefile for NVIDIA Linux GPU driver kernel modules
###########################################################################
# This makefile is read twice: when a user or nvidia-installer invokes
# 'make', this file is read. It then invokes the Linux kernel's
# Kbuild. Modern versions of Kbuild will then read the Kbuild file in
# this directory. However, old versions of Kbuild will instead read
# this Makefile. For backwards compatibility, when read by Kbuild
# (recognized by KERNELRELEASE not being empty), do nothing but
# include the Kbuild file in this directory.
ifneq ($(KERNELRELEASE),)
include $(src)/Kbuild
else
# Determine the location of the Linux kernel source tree, and of the
# kernel's output tree. Use this to invoke Kbuild, and pass the paths
# to the source and output trees to NVIDIA's Kbuild file via
# NV_KERNEL_{SOURCES,OUTPUT}.
ifdef SYSSRC
KERNEL_SOURCES := $(SYSSRC)
else
KERNEL_UNAME ?= $(shell uname -r)
KERNEL_MODLIB := /lib/modules/$(KERNEL_UNAME)
KERNEL_SOURCES := $(shell test -d $(KERNEL_MODLIB)/source && echo $(KERNEL_MODLIB)/source || echo $(KERNEL_MODLIB)/build)
endif
KERNEL_OUTPUT := $(KERNEL_SOURCES)
KBUILD_PARAMS :=
ifdef SYSOUT
ifneq ($(SYSOUT), $(KERNEL_SOURCES))
KERNEL_OUTPUT := $(SYSOUT)
KBUILD_PARAMS := KBUILD_OUTPUT=$(KERNEL_OUTPUT)
endif
else
KERNEL_UNAME ?= $(shell uname -r)
KERNEL_MODLIB := /lib/modules/$(KERNEL_UNAME)
ifeq ($(KERNEL_SOURCES), $(KERNEL_MODLIB)/source)
KERNEL_OUTPUT := $(KERNEL_MODLIB)/build
KBUILD_PARAMS := KBUILD_OUTPUT=$(KERNEL_OUTPUT)
endif
endif
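# Hypothetical example of overriding the detected trees on the command line
# (paths are placeholders):
#
#   make SYSSRC=/usr/src/linux-5.17 SYSOUT=/usr/src/linux-5.17-obj modules
#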
CC ?= cc
LD ?= ld
OBJDUMP ?= objdump
ifndef ARCH
ARCH := $(shell uname -m | sed -e 's/i.86/i386/' \
-e 's/armv[0-7]\w\+/arm/' \
-e 's/aarch64/arm64/' \
-e 's/ppc64le/powerpc/' \
)
endif
NV_KERNEL_MODULES ?= $(wildcard nvidia nvidia-uvm nvidia-vgpu-vfio nvidia-modeset nvidia-drm nvidia-peermem)
NV_KERNEL_MODULES := $(filter-out $(NV_EXCLUDE_KERNEL_MODULES), \
$(NV_KERNEL_MODULES))
NV_VERBOSE ?=
SPECTRE_V2_RETPOLINE ?= 0
ifeq ($(NV_VERBOSE),1)
KBUILD_PARAMS += V=1
endif
KBUILD_PARAMS += -C $(KERNEL_SOURCES) M=$(CURDIR)
KBUILD_PARAMS += ARCH=$(ARCH)
KBUILD_PARAMS += NV_KERNEL_SOURCES=$(KERNEL_SOURCES)
KBUILD_PARAMS += NV_KERNEL_OUTPUT=$(KERNEL_OUTPUT)
KBUILD_PARAMS += NV_KERNEL_MODULES="$(NV_KERNEL_MODULES)"
KBUILD_PARAMS += INSTALL_MOD_DIR=kernel/drivers/video
KBUILD_PARAMS += NV_SPECTRE_V2=$(SPECTRE_V2_RETPOLINE)
.PHONY: modules module clean clean_conftest modules_install
modules clean modules_install:
@$(MAKE) "LD=$(LD)" "CC=$(CC)" "OBJDUMP=$(OBJDUMP)" $(KBUILD_PARAMS) $@
@if [ "$@" = "modules" ]; then \
for module in $(NV_KERNEL_MODULES); do \
if [ -x split-object-file.sh ]; then \
./split-object-file.sh $$module.ko; \
fi; \
done; \
fi
# Compatibility target for scripts that may be directly calling the
# "module" target from the old build system.
module: modules
# Check if any of the kernel module linker scripts exist. If they do, pass
# them as linker options (via variable NV_MODULE_LD_SCRIPTS) while building
# the kernel interface object files. These scripts do some processing on the
# module symbols on which the Linux kernel's module resolution is dependent
# and hence must be used whenever present.
LD_SCRIPT ?= $(KERNEL_SOURCES)/scripts/module-common.lds \
$(KERNEL_SOURCES)/arch/$(ARCH)/kernel/module.lds \
$(KERNEL_OUTPUT)/scripts/module.lds
NV_MODULE_COMMON_SCRIPTS := $(foreach s, $(wildcard $(LD_SCRIPT)), -T $(s))
# Use $* to match the stem % in the kernel interface file %-linux.o. Replace
# "nv" with "nvidia" in $* as appropriate: e.g. nv-modeset-linux.o links
# nvidia-modeset.mod.o and nvidia-modeset/nv-modeset-interface.o. The kernel
# interface file must have the .mod.o object linked into it: otherwise, the
# kernel module produced by linking the interface against its corresponding
# core object file will not be loadable. The .mod.o file is built as part of
# the MODPOST process (stage 2), so the rule to build the kernel interface
# cannot be defined in the *Kbuild files, which are only used during stage 1.
%-linux.o: modules
	$(LD) $(NV_MODULE_COMMON_SCRIPTS) -r -o $@ \
	    $(subst nv,nvidia,$*).mod.o $(subst nv,nvidia,$*)/$*-interface.o
# Kbuild's "clean" rule won't clean up the conftest headers on its own, and
# clean-dirs doesn't appear to work as advertised.
clean_conftest:
	$(RM) -r conftest
clean: clean_conftest
endif # KERNELRELEASE

34
kernel-open/common/inc/conftest.h Normal file

@@ -0,0 +1,34 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2014 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _CONFTEST_H
#define _CONFTEST_H
#include "conftest/headers.h"
#include "conftest/functions.h"
#include "conftest/generic.h"
#include "conftest/macros.h"
#include "conftest/symbols.h"
#include "conftest/types.h"
#endif

459
kernel-open/common/inc/cpuopsys.h Normal file

@@ -0,0 +1,459 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2018-2018 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
/*! \brief
* Define compile time symbols for CPU type and operating system type.
* This file should only contain preprocessor commands so that
* there are no dependencies on other files.
*
* cpuopsys.h
*
* Copyright (c) 2001, Nvidia Corporation. All rights reserved.
*/
/*!
* Uniform names are defined for compile time options to distinguish
* CPU types and Operating systems.
* Distinctions between CPU and OpSys should be orthogonal.
*
* These uniform names have initially been defined by keying off the
* makefile/build names defined for builds in the OpenGL group.
* Getting the uniform names defined for other builds may require
* different qualifications.
*
* The file is placed here to allow for the possibility of all driver
* components using the same naming convention for conditional compilation.
*/
#ifndef CPUOPSYS_H
#define CPUOPSYS_H
/*****************************************************************************/
/* Define all OS/CPU-Chip related symbols */
/* ***** WINDOWS variations */
#if defined(_WIN32) || defined(_WIN16)
# define NV_WINDOWS
# if defined(_WIN32_WINNT)
# define NV_WINDOWS_NT
# elif defined(_WIN32_WCE)
# define NV_WINDOWS_CE
# elif !defined(NV_MODS)
# define NV_WINDOWS_9X
# endif
#endif /* _WIN32 || defined(_WIN16) */
/* ***** Unix variations */
#if defined(__linux__) && !defined(NV_LINUX) && !defined(NV_VMWARE)
# define NV_LINUX
#endif /* defined(__linux__) */
#if defined(__VMWARE__) && !defined(NV_VMWARE)
# define NV_VMWARE
#endif /* defined(__VMWARE__) */
/* SunOS + gcc */
#if defined(__sun__) && defined(__svr4__) && !defined(NV_SUNOS)
# define NV_SUNOS
#endif /* defined(__sun__) && defined(__svr4__) */
/* SunOS + Sun Compiler (named SunPro, Studio or Forte) */
#if defined(__SUNPRO_C) || defined(__SUNPRO_CC)
# define NV_SUNPRO_C
# define NV_SUNOS
#endif /* defined(__SUNPRO_C) || defined(__SUNPRO_CC) */
#if defined(__FreeBSD__) && !defined(NV_BSD)
# define NV_BSD
#endif /* defined(__FreeBSD__) */
/* XXXar don't define NV_UNIX on MacOSX or vxworks or QNX */
#if (defined(__unix__) || defined(__unix) || defined(__INTEGRITY) ) && !defined(nvmacosx) && !defined(vxworks) && !defined(NV_UNIX) && !defined(__QNX__) && !defined(__QNXNTO__)/* XXX until removed from Makefiles */
# define NV_UNIX
#endif /* defined(__unix__) */
#if (defined(__QNX__) || defined(__QNXNTO__)) && !defined(NV_QNX)
# define NV_QNX
#endif
#if (defined(__ANDROID__) || defined(ANDROID)) && !defined(NV_ANDROID)
# define NV_ANDROID
#endif
#if defined(DceCore) && !defined(NV_DCECORE)
# define NV_DCECORE
#endif
/* ***** Apple variations */
#if defined(macintosh) || defined(__APPLE__)
# define NV_MACINTOSH
# if defined(__MACH__)
# define NV_MACINTOSH_OSX
# else
# define NV_MACINTOSH_OS9
# endif
# if defined(__LP64__)
# define NV_MACINTOSH_64
# endif
#endif /* defined(macintosh) */
/* ***** VxWorks */
/* Tornado 2.21 is gcc 2.96 and #defines __vxworks. */
/* Tornado 2.02 is gcc 2.7.2 and doesn't define any OS symbol, so we rely on */
/* the build system #defining vxworks. */
#if defined(__vxworks) || defined(vxworks)
# define NV_VXWORKS
#endif
/* ***** Integrity OS */
#if defined(__INTEGRITY)
# if !defined(NV_INTEGRITY)
# define NV_INTEGRITY
# endif
#endif
/* ***** Processor type variations */
/* Note: The prefix NV_CPU_* is taken by Nvcm.h */
#if ((defined(_M_IX86) || defined(__i386__) || defined(__i386)) && !defined(NVCPU_X86)) /* XXX until removed from Makefiles */
/* _M_IX86 for windows, __i386__ for Linux (or any x86 using gcc) */
/* __i386 for Studio compiler on Solaris x86 */
# define NVCPU_X86 /* any IA32 machine (not x86-64) */
# define NVCPU_MIN_PAGE_SHIFT 12
#endif
#if defined(_WIN32) && defined(_M_IA64)
# define NVCPU_IA64_WINDOWS /* any IA64 for Windows opsys */
#endif
#if defined(NV_LINUX) && defined(__ia64__)
# define NVCPU_IA64_LINUX /* any IA64 for Linux opsys */
#endif
#if defined(NVCPU_IA64_WINDOWS) || defined(NVCPU_IA64_LINUX) || defined(IA64)
# define NVCPU_IA64 /* any IA64 for any opsys */
#endif
#if (defined(NV_MACINTOSH) && !(defined(__i386__) || defined(__x86_64__))) || defined(__PPC__) || defined(__ppc)
# if defined(__powerpc64__) && defined(__LITTLE_ENDIAN__)
# ifndef NVCPU_PPC64LE
# define NVCPU_PPC64LE /* PPC 64-bit little endian */
# endif
# else
# ifndef NVCPU_PPC
# define NVCPU_PPC /* any non-PPC64LE PowerPC architecture */
# endif
# ifndef NV_BIG_ENDIAN
# define NV_BIG_ENDIAN
# endif
# endif
# define NVCPU_FAMILY_PPC
#endif
#if defined(__x86_64) || defined(AMD64) || defined(_M_AMD64)
# define NVCPU_X86_64 /* any x86-64 for any opsys */
#endif
#if defined(NVCPU_X86) || defined(NVCPU_X86_64)
# define NVCPU_FAMILY_X86
#endif
#if defined(__riscv) && (__riscv_xlen==64)
# define NVCPU_RISCV64
# if defined(__nvriscv)
# define NVCPU_NVRISCV64
# endif
#endif
#if defined(__arm__) || defined(_M_ARM)
/*
* 32-bit instruction set on, e.g., ARMv7 or AArch32 execution state
* on ARMv8
*/
# define NVCPU_ARM
# define NVCPU_MIN_PAGE_SHIFT 12
#endif
#if defined(__aarch64__) || defined(__ARM64__) || defined(_M_ARM64)
# define NVCPU_AARCH64 /* 64-bit A64 instruction set on ARMv8 */
# define NVCPU_MIN_PAGE_SHIFT 12
#endif
#if defined(NVCPU_ARM) || defined(NVCPU_AARCH64)
# define NVCPU_FAMILY_ARM
#endif
#if defined(__SH4__)
# ifndef NVCPU_SH4
# define NVCPU_SH4 /* Renesas (formerly Hitachi) SH4 */
# endif
# if defined NV_WINDOWS_CE
# define NVCPU_MIN_PAGE_SHIFT 12
# endif
#endif
/* For Xtensa processors */
#if defined(__XTENSA__)
# define NVCPU_XTENSA
# if defined(__XTENSA_EB__)
# define NV_BIG_ENDIAN
# endif
#endif
/*
* Other flavors of CPU type should be determined at run-time.
* For example, an x86 architecture with/without SSE.
* If it can compile, then there's no need for a compile time option.
* For some current GCC limitations, these may be fixed by using the Intel
* compiler for certain files in a Linux build.
*/
/* The minimum page size can be determined from the minimum page shift */
#if defined(NVCPU_MIN_PAGE_SHIFT)
#define NVCPU_MIN_PAGE_SIZE (1 << NVCPU_MIN_PAGE_SHIFT)
#endif
#if defined(NVCPU_IA64) || defined(NVCPU_X86_64) || \
defined(NV_MACINTOSH_64) || defined(NVCPU_AARCH64) || \
defined(NVCPU_PPC64LE) || defined(NVCPU_RISCV64)
# define NV_64_BITS /* all architectures where pointers are 64 bits */
#else
/* we assume 32 bits. I don't see a need for NV_16_BITS. */
#endif
/* For verification-only features not intended to be included in normal drivers */
#if (defined(NV_MODS) || defined(NV_GSP_MODS)) && defined(DEBUG) && !defined(DISABLE_VERIF_FEATURES)
#define NV_VERIF_FEATURES
#endif
/*
* New, safer family of #define's -- these ones use 0 vs. 1 rather than
* defined/!defined. This is advantageous because if you make a typo,
* say misspelled ENDIAN:
*
* #if NVCPU_IS_BIG_ENDAIN
*
* ...some compilers can give you a warning telling you that you screwed up.
* The compiler can also give you a warning if you forget to #include
* "cpuopsys.h" in your code before the point where you try to use these
* conditionals.
*
* Also, the names have been prefixed in more cases with "CPU" or "OS" for
* increased clarity. You can tell the names apart from the old ones because
* they all use "_IS_" in the name.
*
* Finally, these can be used in "if" statements and not just in #if's. For
* example:
*
* if (NVCPU_IS_BIG_ENDIAN) x = Swap32(x);
*
* Maybe some day in the far-off future these can replace the old #define's.
*/
#if defined(NV_MODS)
#define NV_IS_MODS 1
#else
#define NV_IS_MODS 0
#endif
#if defined(NV_GSP_MODS)
#define NV_IS_GSP_MODS 1
#else
#define NV_IS_GSP_MODS 0
#endif
#if defined(NV_WINDOWS)
#define NVOS_IS_WINDOWS 1
#else
#define NVOS_IS_WINDOWS 0
#endif
#if defined(NV_WINDOWS_CE)
#define NVOS_IS_WINDOWS_CE 1
#else
#define NVOS_IS_WINDOWS_CE 0
#endif
#if defined(NV_LINUX)
#define NVOS_IS_LINUX 1
#else
#define NVOS_IS_LINUX 0
#endif
#if defined(NV_UNIX)
#define NVOS_IS_UNIX 1
#else
#define NVOS_IS_UNIX 0
#endif
#if defined(NV_BSD)
#define NVOS_IS_FREEBSD 1
#else
#define NVOS_IS_FREEBSD 0
#endif
#if defined(NV_SUNOS)
#define NVOS_IS_SOLARIS 1
#else
#define NVOS_IS_SOLARIS 0
#endif
#if defined(NV_VMWARE)
#define NVOS_IS_VMWARE 1
#else
#define NVOS_IS_VMWARE 0
#endif
#if defined(NV_QNX)
#define NVOS_IS_QNX 1
#else
#define NVOS_IS_QNX 0
#endif
#if defined(NV_ANDROID)
#define NVOS_IS_ANDROID 1
#else
#define NVOS_IS_ANDROID 0
#endif
#if defined(NV_MACINTOSH)
#define NVOS_IS_MACINTOSH 1
#else
#define NVOS_IS_MACINTOSH 0
#endif
#if defined(NV_VXWORKS)
#define NVOS_IS_VXWORKS 1
#else
#define NVOS_IS_VXWORKS 0
#endif
#if defined(NV_LIBOS)
#define NVOS_IS_LIBOS 1
#else
#define NVOS_IS_LIBOS 0
#endif
#if defined(NV_INTEGRITY)
#define NVOS_IS_INTEGRITY 1
#else
#define NVOS_IS_INTEGRITY 0
#endif
#if defined(NVCPU_X86)
#define NVCPU_IS_X86 1
#else
#define NVCPU_IS_X86 0
#endif
#if defined(NVCPU_RISCV64)
#define NVCPU_IS_RISCV64 1
#else
#define NVCPU_IS_RISCV64 0
#endif
#if defined(NVCPU_NVRISCV64)
#define NVCPU_IS_NVRISCV64 1
#else
#define NVCPU_IS_NVRISCV64 0
#endif
#if defined(NVCPU_IA64)
#define NVCPU_IS_IA64 1
#else
#define NVCPU_IS_IA64 0
#endif
#if defined(NVCPU_X86_64)
#define NVCPU_IS_X86_64 1
#else
#define NVCPU_IS_X86_64 0
#endif
#if defined(NVCPU_FAMILY_X86)
#define NVCPU_IS_FAMILY_X86 1
#else
#define NVCPU_IS_FAMILY_X86 0
#endif
#if defined(NVCPU_PPC)
#define NVCPU_IS_PPC 1
#else
#define NVCPU_IS_PPC 0
#endif
#if defined(NVCPU_PPC64LE)
#define NVCPU_IS_PPC64LE 1
#else
#define NVCPU_IS_PPC64LE 0
#endif
#if defined(NVCPU_FAMILY_PPC)
#define NVCPU_IS_FAMILY_PPC 1
#else
#define NVCPU_IS_FAMILY_PPC 0
#endif
#if defined(NVCPU_ARM)
#define NVCPU_IS_ARM 1
#else
#define NVCPU_IS_ARM 0
#endif
#if defined(NVCPU_AARCH64)
#define NVCPU_IS_AARCH64 1
#else
#define NVCPU_IS_AARCH64 0
#endif
#if defined(NVCPU_FAMILY_ARM)
#define NVCPU_IS_FAMILY_ARM 1
#else
#define NVCPU_IS_FAMILY_ARM 0
#endif
#if defined(NVCPU_SH4)
#define NVCPU_IS_SH4 1
#else
#define NVCPU_IS_SH4 0
#endif
#if defined(NVCPU_XTENSA)
#define NVCPU_IS_XTENSA 1
#else
#define NVCPU_IS_XTENSA 0
#endif
#if defined(NV_BIG_ENDIAN)
#define NVCPU_IS_BIG_ENDIAN 1
#else
#define NVCPU_IS_BIG_ENDIAN 0
#endif
#if defined(NV_64_BITS)
#define NVCPU_IS_64_BITS 1
#else
#define NVCPU_IS_64_BITS 0
#endif
#if defined(NVCPU_FAMILY_ARM)
#define NVCPU_IS_PCIE_CACHE_COHERENT 0
#else
#define NVCPU_IS_PCIE_CACHE_COHERENT 1
#endif
#if defined(NV_DCECORE)
#define NVOS_IS_DCECORE 1
#else
#define NVOS_IS_DCECORE 0
#endif
/*****************************************************************************/
#endif /* CPUOPSYS_H */


@@ -0,0 +1,94 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2019 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _NV_CAPS_H_
#define _NV_CAPS_H_
#include <nv-kernel-interface-api.h>
/*
 * Opaque OS-specific struct; on Linux, this has a
 * 'struct proc_dir_entry' member.
*/
typedef struct nv_cap nv_cap_t;
/*
 * Creates a directory named "capabilities" under the provided path.
*
* @param[in] path Absolute path
*
* Returns a valid nv_cap_t upon success. Otherwise, returns NULL.
*/
nv_cap_t* NV_API_CALL nv_cap_init(const char *path);
/*
* Creates capability directory entry
*
* @param[in] parent_cap Parent capability directory
* @param[in] name Capability directory's name
* @param[in] mode Capability directory's access mode
*
* Returns a valid nv_cap_t upon success. Otherwise, returns NULL.
*/
nv_cap_t* NV_API_CALL nv_cap_create_dir_entry(nv_cap_t *parent_cap, const char *name, int mode);
/*
* Creates capability file entry
*
* @param[in] parent_cap Parent capability directory
* @param[in] name Capability file's name
* @param[in] mode Capability file's access mode
*
* Returns a valid nv_cap_t upon success. Otherwise, returns NULL.
*/
nv_cap_t* NV_API_CALL nv_cap_create_file_entry(nv_cap_t *parent_cap, const char *name, int mode);
/*
* Destroys capability entry
*
* @param[in] cap Capability entry
*/
void NV_API_CALL nv_cap_destroy_entry(nv_cap_t *cap);
/*
* Validates and duplicates the provided file descriptor
*
* @param[in] cap Capability entry
* @param[in] fd File descriptor to be validated
*
* Returns duplicate fd upon success. Otherwise, returns -1.
*/
int NV_API_CALL nv_cap_validate_and_dup_fd(const nv_cap_t *cap, int fd);
/*
* Closes file descriptor
*
* This function should be used to close duplicate file descriptors
* returned by nv_cap_validate_and_dup_fd.
*
 * @param[in] fd File descriptor to be closed
*
*/
void NV_API_CALL nv_cap_close_fd(int fd);
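/*
 * Usage sketch (illustrative only, not part of the original header; the
 * path, entry names, and modes below are hypothetical, and error handling
 * is abbreviated):
 *
 *   nv_cap_t *root, *dir, *file;
 *
 *   root = nv_cap_init("/driver/example");      // creates ".../capabilities"
 *   dir  = nv_cap_create_dir_entry(root, "mig", 0555);
 *   file = nv_cap_create_file_entry(dir, "config", 0444);
 *   ...
 *   nv_cap_destroy_entry(file);
 *   nv_cap_destroy_entry(dir);
 */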
#endif /* _NV_CAPS_H_ */


@@ -0,0 +1,31 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _NV_DMABUF_H_
#define _NV_DMABUF_H_
#include "nv-linux.h"
NV_STATUS nv_dma_buf_export(nv_state_t *, nv_ioctl_export_to_dma_buf_fd_t *);
#endif // _NV_DMABUF_H_


@@ -0,0 +1,44 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2015 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _NV_GPU_INFO_H_
#define _NV_GPU_INFO_H_
typedef struct {
NvU32 gpu_id;
struct {
NvU32 domain;
NvU8 bus, slot, function;
} pci_info;
/*
* opaque OS-specific pointer; on Linux, this is a pointer to the
* 'struct device' for the GPU.
*/
void *os_device_ptr;
} nv_gpu_info_t;
#define NV_MAX_GPUS 32
#endif /* _NV_GPU_INFO_H_ */


@@ -0,0 +1,96 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2020 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NV_HASH_H__
#define __NV_HASH_H__
#include "conftest.h"
#include "nv-list-helpers.h"
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/hash.h>
#if defined(NV_LINUX_STRINGHASH_H_PRESENT)
#include <linux/stringhash.h> /* full_name_hash() */
#else
#include <linux/dcache.h>
#endif
#if (NV_FULL_NAME_HASH_ARGUMENT_COUNT == 3)
#define nv_string_hash(_str) full_name_hash(NULL, _str, strlen(_str))
#else
#define nv_string_hash(_str) full_name_hash(_str, strlen(_str))
#endif
/**
* This naive hashtable was introduced by commit d9b482c8ba19 (v3.7, 2012-10-31).
 * To support older kernels, import the necessary functionality from
* <linux/hashtable.h>.
*/
#define NV_HASH_SIZE(name) (ARRAY_SIZE(name))
#define NV_HASH_BITS(name) ilog2(NV_HASH_SIZE(name))
/* Use hash_32 when possible to allow fast 32-bit hashing in 64-bit kernels. */
#define NV_HASH_MIN(val, bits) \
(sizeof(val) <= 4 ? hash_32(val, bits) : hash_long(val, bits))
#define NV_DECLARE_HASHTABLE(name, bits) \
struct hlist_head name[1 << (bits)]
static inline void _nv_hash_init(struct hlist_head *ht, unsigned int sz)
{
unsigned int i;
for (i = 0; i < sz; i++)
{
INIT_HLIST_HEAD(&ht[i]);
}
}
/**
* nv_hash_init - initialize a hash table
* @hashtable: hashtable to be initialized
*/
#define nv_hash_init(hashtable) _nv_hash_init(hashtable, NV_HASH_SIZE(hashtable))
/**
* nv_hash_add - add an object to a hashtable
* @hashtable: hashtable to add to
* @node: the &struct hlist_node of the object to be added
* @key: the key of the object to be added
*/
#define nv_hash_add(hashtable, node, key) \
hlist_add_head(node, &hashtable[NV_HASH_MIN(key, NV_HASH_BITS(hashtable))])
/**
* nv_hash_for_each_possible - iterate over all possible objects hashing to the
* same bucket
* @name: hashtable to iterate
* @obj: the type * to use as a loop cursor for each entry
* @member: the name of the hlist_node within the struct
* @key: the key of the objects to iterate over
*/
#define nv_hash_for_each_possible(name, obj, member, key) \
nv_hlist_for_each_entry(obj, &name[NV_HASH_MIN(key, NV_HASH_BITS(name))], member)
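/*
 * Usage sketch (illustrative only; 'struct item' and the variables below
 * are hypothetical):
 *
 *   struct item
 *   {
 *       unsigned long key;
 *       struct hlist_node node;
 *   };
 *
 *   static NV_DECLARE_HASHTABLE(table, 6);      // 2^6 = 64 buckets
 *
 *   struct item *entry;
 *
 *   nv_hash_init(table);
 *   nv_hash_add(table, &new_item->node, new_item->key);
 *   nv_hash_for_each_possible(table, entry, node, key)
 *   {
 *       if (entry->key == key)
 *           break;                              // found a match in this bucket
 *   }
 */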
#endif // __NV_HASH_H__


@@ -0,0 +1,125 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 1999-2018 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _NV_HYPERVISOR_H_
#define _NV_HYPERVISOR_H_
#include <nv-kernel-interface-api.h>
// Enums for supported hypervisor types.
// A new hypervisor type should be added before OS_HYPERVISOR_CUSTOM_FORCED
typedef enum _HYPERVISOR_TYPE
{
OS_HYPERVISOR_XEN = 0,
OS_HYPERVISOR_VMWARE,
OS_HYPERVISOR_HYPERV,
OS_HYPERVISOR_KVM,
OS_HYPERVISOR_PARALLELS,
OS_HYPERVISOR_CUSTOM_FORCED,
OS_HYPERVISOR_UNKNOWN
} HYPERVISOR_TYPE;
#define CMD_VGPU_VFIO_WAKE_WAIT_QUEUE 0
#define CMD_VGPU_VFIO_INJECT_INTERRUPT 1
#define CMD_VGPU_VFIO_REGISTER_MDEV 2
#define CMD_VGPU_VFIO_PRESENT 3
#define MAX_VF_COUNT_PER_GPU 64
typedef enum _VGPU_TYPE_INFO
{
VGPU_TYPE_NAME = 0,
VGPU_TYPE_DESCRIPTION,
VGPU_TYPE_INSTANCES,
} VGPU_TYPE_INFO;
typedef struct
{
void *vgpuVfioRef;
void *waitQueue;
void *nv;
NvU32 *vgpuTypeIds;
NvU32 numVgpuTypes;
NvU32 domain;
NvU8 bus;
NvU8 slot;
NvU8 function;
NvBool is_virtfn;
} vgpu_vfio_info;
typedef struct
{
NvU32 domain;
NvU8 bus;
NvU8 slot;
NvU8 function;
NvBool isNvidiaAttached;
NvBool isMdevAttached;
} vgpu_vf_pci_info;
typedef enum VGPU_CMD_PROCESS_VF_INFO_E
{
NV_VGPU_SAVE_VF_INFO = 0,
NV_VGPU_REMOVE_VF_PCI_INFO = 1,
NV_VGPU_REMOVE_VF_MDEV_INFO = 2,
NV_VGPU_GET_VF_INFO = 3
} VGPU_CMD_PROCESS_VF_INFO;
typedef enum VGPU_DEVICE_STATE_E
{
NV_VGPU_DEV_UNUSED = 0,
NV_VGPU_DEV_OPENED = 1,
NV_VGPU_DEV_IN_USE = 2
} VGPU_DEVICE_STATE;
typedef enum _VMBUS_CMD_TYPE
{
VMBUS_CMD_TYPE_INVALID = 0,
VMBUS_CMD_TYPE_SETUP = 1,
VMBUS_CMD_TYPE_SENDPACKET = 2,
VMBUS_CMD_TYPE_CLEANUP = 3,
} VMBUS_CMD_TYPE;
typedef struct
{
NvU32 request_id;
NvU32 page_count;
NvU64 *pPfns;
void *buffer;
NvU32 bufferlen;
} vmbus_send_packet_cmd_params;
typedef struct
{
NvU32 override_sint;
NvU8 *nv_guid;
} vmbus_setup_cmd_params;
/*
* Function prototypes
*/
HYPERVISOR_TYPE NV_API_CALL nv_get_hypervisor_type(void);
#endif // _NV_HYPERVISOR_H_


@@ -0,0 +1,84 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2020 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef NV_IOCTL_NUMA_H
#define NV_IOCTL_NUMA_H
#if defined(NV_LINUX)
#include <nv-ioctl-numbers.h>
#if defined(NV_KERNEL_INTERFACE_LAYER)
#include <linux/types.h>
#else
#include <stdint.h>
#if !defined(__aligned)
#define __aligned(n) __attribute__((aligned(n)))
#endif
#endif
#define NV_ESC_NUMA_INFO (NV_IOCTL_BASE + 15)
#define NV_ESC_SET_NUMA_STATUS (NV_IOCTL_BASE + 16)
#define NV_IOCTL_NUMA_INFO_MAX_OFFLINE_ADDRESSES 64
typedef struct offline_addresses
{
uint64_t addresses[NV_IOCTL_NUMA_INFO_MAX_OFFLINE_ADDRESSES] __aligned(8);
uint32_t numEntries;
} nv_offline_addresses_t;
/* per-device NUMA memory info as assigned by the system */
typedef struct nv_ioctl_numa_info
{
int32_t nid;
int32_t status;
uint64_t memblock_size __aligned(8);
uint64_t numa_mem_addr __aligned(8);
uint64_t numa_mem_size __aligned(8);
nv_offline_addresses_t offline_addresses __aligned(8);
} nv_ioctl_numa_info_t;
/* set the status of the device NUMA memory */
typedef struct nv_ioctl_set_numa_status
{
int32_t status;
} nv_ioctl_set_numa_status_t;
#define NV_IOCTL_NUMA_STATUS_DISABLED 0
#define NV_IOCTL_NUMA_STATUS_OFFLINE 1
#define NV_IOCTL_NUMA_STATUS_ONLINE_IN_PROGRESS 2
#define NV_IOCTL_NUMA_STATUS_ONLINE 3
#define NV_IOCTL_NUMA_STATUS_ONLINE_FAILED 4
#define NV_IOCTL_NUMA_STATUS_OFFLINE_IN_PROGRESS 5
#define NV_IOCTL_NUMA_STATUS_OFFLINE_FAILED 6
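/*
 * Usage sketch (illustrative only): user space might query a device's NUMA
 * memory state with NV_ESC_NUMA_INFO. The _IOWR() encoding and the open
 * device 'fd' below are assumptions, not taken from this header:
 *
 *   nv_ioctl_numa_info_t info = { 0 };
 *
 *   if (ioctl(fd, _IOWR(NV_IOCTL_MAGIC, NV_ESC_NUMA_INFO,
 *                       nv_ioctl_numa_info_t), &info) == 0 &&
 *       info.status == NV_IOCTL_NUMA_STATUS_ONLINE)
 *   {
 *       // device memory spans [numa_mem_addr, numa_mem_addr + numa_mem_size)
 *   }
 */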
#endif
#endif


@@ -0,0 +1,43 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2020-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef NV_IOCTL_NUMBERS_H
#define NV_IOCTL_NUMBERS_H
/* NOTE: using an ioctl() number > 55 will overflow: the _IOC 'nr' field is
 * 8 bits wide and NV_IOCTL_BASE is 200, so 200 + 55 = 255 is the maximum. */
#define NV_IOCTL_MAGIC 'F'
#define NV_IOCTL_BASE 200
#define NV_ESC_CARD_INFO (NV_IOCTL_BASE + 0)
#define NV_ESC_REGISTER_FD (NV_IOCTL_BASE + 1)
#define NV_ESC_ALLOC_OS_EVENT (NV_IOCTL_BASE + 6)
#define NV_ESC_FREE_OS_EVENT (NV_IOCTL_BASE + 7)
#define NV_ESC_STATUS_CODE (NV_IOCTL_BASE + 9)
#define NV_ESC_CHECK_VERSION_STR (NV_IOCTL_BASE + 10)
#define NV_ESC_IOCTL_XFER_CMD (NV_IOCTL_BASE + 11)
#define NV_ESC_ATTACH_GPUS_TO_FD (NV_IOCTL_BASE + 12)
#define NV_ESC_QUERY_DEVICE_INTR (NV_IOCTL_BASE + 13)
#define NV_ESC_SYS_PARAMS (NV_IOCTL_BASE + 14)
#define NV_ESC_EXPORT_TO_DMABUF_FD (NV_IOCTL_BASE + 17)
#endif


@@ -0,0 +1,145 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2020-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef NV_IOCTL_H
#define NV_IOCTL_H
#include <nv-ioctl-numbers.h>
#include <nvtypes.h>
typedef struct {
NvU32 domain; /* PCI domain number */
NvU8 bus; /* PCI bus number */
NvU8 slot; /* PCI slot number */
NvU8 function; /* PCI function number */
NvU16 vendor_id; /* PCI vendor ID */
NvU16 device_id; /* PCI device ID */
} nv_pci_info_t;
/*
* ioctl()'s with parameter structures too large for the
* _IOC cmd layout use the nv_ioctl_xfer_t structure
* and the NV_ESC_IOCTL_XFER_CMD ioctl() to pass the actual
* size and user argument pointer into the RM, which
* will then copy it to/from kernel space in separate steps.
*/
typedef struct nv_ioctl_xfer
{
NvU32 cmd;
NvU32 size;
NvP64 ptr NV_ALIGN_BYTES(8);
} nv_ioctl_xfer_t;
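/*
 * Usage sketch (illustrative only): a caller whose parameter struct is too
 * large for the _IOC size field might wrap it as follows. The _IOWR()
 * encoding, 'fd', 'params', and NV_ESC_SOME_LARGE_CMD are assumptions,
 * not taken from this header:
 *
 *   nv_ioctl_xfer_t xfer;
 *
 *   xfer.cmd  = NV_ESC_SOME_LARGE_CMD;          // hypothetical escape number
 *   xfer.size = sizeof(params);
 *   xfer.ptr  = (NvP64)(uintptr_t)&params;
 *
 *   ioctl(fd, _IOWR(NV_IOCTL_MAGIC, NV_ESC_IOCTL_XFER_CMD, nv_ioctl_xfer_t),
 *         &xfer);
 */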
typedef struct nv_ioctl_card_info
{
NvBool valid;
nv_pci_info_t pci_info; /* PCI config information */
NvU32 gpu_id;
NvU16 interrupt_line;
NvU64 reg_address NV_ALIGN_BYTES(8);
NvU64 reg_size NV_ALIGN_BYTES(8);
NvU64 fb_address NV_ALIGN_BYTES(8);
NvU64 fb_size NV_ALIGN_BYTES(8);
NvU32 minor_number;
NvU8 dev_name[10]; /* device names such as vmgfx[0-32] for vmkernel */
} nv_ioctl_card_info_t;
/* alloc event */
typedef struct nv_ioctl_alloc_os_event
{
NvHandle hClient;
NvHandle hDevice;
NvU32 fd;
NvU32 Status;
} nv_ioctl_alloc_os_event_t;
/* free event */
typedef struct nv_ioctl_free_os_event
{
NvHandle hClient;
NvHandle hDevice;
NvU32 fd;
NvU32 Status;
} nv_ioctl_free_os_event_t;
/* status code */
typedef struct nv_ioctl_status_code
{
NvU32 domain;
NvU8 bus;
NvU8 slot;
NvU32 status;
} nv_ioctl_status_code_t;
/* check version string */
#define NV_RM_API_VERSION_STRING_LENGTH 64
typedef struct nv_ioctl_rm_api_version
{
NvU32 cmd;
NvU32 reply;
char versionString[NV_RM_API_VERSION_STRING_LENGTH];
} nv_ioctl_rm_api_version_t;
#define NV_RM_API_VERSION_CMD_STRICT 0
#define NV_RM_API_VERSION_CMD_RELAXED '1'
#define NV_RM_API_VERSION_CMD_OVERRIDE '2'
#define NV_RM_API_VERSION_REPLY_UNRECOGNIZED 0
#define NV_RM_API_VERSION_REPLY_RECOGNIZED 1
typedef struct nv_ioctl_query_device_intr
{
NvU32 intrStatus NV_ALIGN_BYTES(4);
NvU32 status;
} nv_ioctl_query_device_intr;
/* system parameters that the kernel driver may use for configuration */
typedef struct nv_ioctl_sys_params
{
NvU64 memblock_size NV_ALIGN_BYTES(8);
} nv_ioctl_sys_params_t;
typedef struct nv_ioctl_register_fd
{
int ctl_fd;
} nv_ioctl_register_fd_t;
#define NV_DMABUF_EXPORT_MAX_HANDLES 128
typedef struct nv_ioctl_export_to_dma_buf_fd
{
int fd;
NvHandle hClient;
NvU32 totalObjects;
NvU32 numObjects;
NvU32 index;
NvU64 totalSize NV_ALIGN_BYTES(8);
NvHandle handles[NV_DMABUF_EXPORT_MAX_HANDLES];
NvU64 offsets[NV_DMABUF_EXPORT_MAX_HANDLES] NV_ALIGN_BYTES(8);
NvU64 sizes[NV_DMABUF_EXPORT_MAX_HANDLES] NV_ALIGN_BYTES(8);
NvU32 status;
} nv_ioctl_export_to_dma_buf_fd_t;
#endif


@@ -0,0 +1,41 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2018-2018 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _NV_KERNEL_INTERFACE_API_H
#define _NV_KERNEL_INTERFACE_API_H
/**************************************************************************************************************
*
* File: nv-kernel-interface-api.h
*
* Description:
* Defines the NV API related macros.
*
**************************************************************************************************************/
#if NVOS_IS_UNIX && NVCPU_IS_X86_64 && defined(__use_altstack__)
#define NV_API_CALL __attribute__((altstack(0)))
#else
#define NV_API_CALL
#endif
#endif /* _NV_KERNEL_INTERFACE_API_H */


@@ -0,0 +1,61 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2017 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NV_KREF_H__
#define __NV_KREF_H__
#include <asm/atomic.h>
typedef struct nv_kref
{
atomic_t refcount;
} nv_kref_t;
static inline void nv_kref_init(nv_kref_t *nv_kref)
{
atomic_set(&nv_kref->refcount, 1);
}
static inline void nv_kref_get(nv_kref_t *nv_kref)
{
atomic_inc(&nv_kref->refcount);
}
static inline int nv_kref_put(nv_kref_t *nv_kref,
void (*release)(nv_kref_t *nv_kref))
{
if (atomic_dec_and_test(&nv_kref->refcount))
{
release(nv_kref);
return 1;
}
return 0;
}
static inline unsigned int nv_kref_read(const nv_kref_t *nv_kref)
{
return atomic_read(&nv_kref->refcount);
}
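/*
 * Usage sketch (illustrative only; 'my_obj_t', 'my_obj_release', and the
 * allocation scheme are hypothetical):
 *
 *   typedef struct
 *   {
 *       nv_kref_t refcount;
 *       ...
 *   } my_obj_t;
 *
 *   static void my_obj_release(nv_kref_t *ref)
 *   {
 *       my_obj_t *obj = container_of(ref, my_obj_t, refcount);
 *       kfree(obj);
 *   }
 *
 *   nv_kref_init(&obj->refcount);                 // refcount == 1
 *   nv_kref_get(&obj->refcount);                  // take another reference
 *   nv_kref_put(&obj->refcount, my_obj_release);  // drop it; frees at zero
 */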
#endif // __NV_KREF_H__


@@ -0,0 +1,255 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2016 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NV_KTHREAD_QUEUE_H__
#define __NV_KTHREAD_QUEUE_H__
#include <linux/types.h> // atomic_t
#include <linux/list.h> // list
#include <linux/sched.h> // task_struct
#include <linux/numa.h> // NUMA_NO_NODE
#include "conftest.h"
#if defined(NV_LINUX_SEMAPHORE_H_PRESENT)
#include <linux/semaphore.h>
#else
#include <asm/semaphore.h>
#endif
////////////////////////////////////////////////////////////////////////////////
// nv_kthread_q:
//
// 1. API and overview
//
// This "nv_kthread_q" system implements a simple queuing system for deferred
// work. The nv_kthread_q system has goals and use cases that are similar to
// the named workqueues in the Linux kernel, but nv_kthread_q is much (10x or
// so) smaller, simpler--and correspondingly less general. Deferred work
// items are put into a queue, and run within the context of a dedicated set
// of kernel threads (kthread).
//
// In order to avoid confusion with the Linux workqueue system, I have
// avoided using the term "work", and instead refer to "queues" (also called
// "q's") and "queue items" (also called "q_items"), in both variable names
// and comments.
//
// This module depends only upon the Linux kernel.
//
// Queue items that are submitted to separate nv_kthread_q instances are
// guaranteed to be run in different kthreads.
//
// Queue items that are submitted to the same nv_kthread_q are not guaranteed
// to be serialized, nor are they guaranteed to run concurrently.
//
// 2. Allocations
//
// The caller allocates queues and queue items. The nv_kthread_q APIs do
// the initialization (zeroing and setup) of queues and queue items.
// Allocation is handled that way, because one of the first use cases is a
// bottom half interrupt handler, and for that, queue items should be
// pre-allocated (for example, one per GPU), so that no allocation is
// required in the top-half interrupt handler. Relevant API calls:
//
// 3. Queue initialization
//
// nv_kthread_q_init() initializes a queue on the current NUMA node.
//
// or
//
// nv_kthread_q_init_on_node() initializes a queue on a specific NUMA node.
//
// 4. Scheduling things for the queue to run
//
// The nv_kthread_q_schedule_q_item() routine will schedule a q_item to run.
//
// 5. Stopping the queue(s)
//
// The nv_kthread_q_stop() routine will flush the queue, and safely stop
// the kthread, before returning.
//
////////////////////////////////////////////////////////////////////////////////
typedef struct nv_kthread_q nv_kthread_q_t;
typedef struct nv_kthread_q_item nv_kthread_q_item_t;
typedef void (*nv_q_func_t)(void *args);
struct nv_kthread_q
{
struct list_head q_list_head;
spinlock_t q_lock;
// This is a counting semaphore. It gets incremented and decremented
// exactly once for each item that is added to the queue.
struct semaphore q_sem;
atomic_t main_loop_should_exit;
struct task_struct *q_kthread;
};
struct nv_kthread_q_item
{
struct list_head q_list_node;
nv_q_func_t function_to_run;
void *function_args;
};
#if defined(NV_KTHREAD_CREATE_ON_NODE_PRESENT)
#define NV_KTHREAD_Q_SUPPORTS_AFFINITY() 1
#else
#define NV_KTHREAD_Q_SUPPORTS_AFFINITY() 0
#endif
#ifndef NUMA_NO_NODE
#define NUMA_NO_NODE (-1)
#endif
#define NV_KTHREAD_NO_NODE NUMA_NO_NODE
//
// The queue must not be used before calling this routine.
//
// The caller allocates an nv_kthread_q_t item. This routine initializes
// the queue, and starts up a kernel thread ("kthread") to service the queue.
// The queue will initially be empty; there is intentionally no way to
// pre-initialize the queue with items to run.
//
// In order to avoid external dependencies (specifically, NV_STATUS codes), this
// returns a Linux kernel (negative) errno on failure, and zero on success. It
// is safe to call nv_kthread_q_stop() on a queue that nv_kthread_q_init()
// failed for.
//
// A short prefix of the qname arg will show up in []'s, via the ps(1) utility.
//
// The kernel thread stack is preferably allocated on the specified NUMA node if
// NUMA-affinity (NV_KTHREAD_Q_SUPPORTS_AFFINITY() == 1) is supported, but
// fallback to another node is possible because kernel allocators do not
// guarantee affinity. Note that NUMA-affinity applies only to
// the kthread stack. This API does not do anything about limiting the CPU
// affinity of the kthread. That is left to the caller.
//
// On kernels that do not support NUMA-aware kthread stack allocations
// (NV_KTHREAD_Q_SUPPORTS_AFFINITY() == 0), the API will return -ENOTSUPP
// if the value supplied for 'preferred_node' is anything other than
// NV_KTHREAD_NO_NODE.
//
// Reusing a queue: once a queue is initialized, it must be safely shut down
// (see "Stopping the queue(s)", below), before it can be reused. So, for
// a simple queue use case, the following will work:
//
// nv_kthread_q_init_on_node(&some_q, "display_name", preferred_node);
// nv_kthread_q_stop(&some_q);
// nv_kthread_q_init_on_node(&some_q, "reincarnated", preferred_node);
// nv_kthread_q_stop(&some_q);
//
int nv_kthread_q_init_on_node(nv_kthread_q_t *q,
const char *qname,
int preferred_node);
//
// This routine is the same as nv_kthread_q_init_on_node() with the exception
// that the queue stack will be allocated on the NUMA node of the caller.
//
static inline int nv_kthread_q_init(nv_kthread_q_t *q, const char *qname)
{
return nv_kthread_q_init_on_node(q, qname, NV_KTHREAD_NO_NODE);
}
//
// The caller is responsible for stopping all queues, by calling this routine
// before, for example, kernel module unloading. This nv_kthread_q_stop()
// routine will flush the queue, and safely stop the kthread, before returning.
//
// You may ONLY call nv_kthread_q_stop() once, unless you reinitialize the
// queue in between, as shown in the nv_kthread_q_init() documentation, above.
//
// Do not add any more items to the queue after calling nv_kthread_q_stop.
//
// Calling nv_kthread_q_stop() on a queue which has been zero-initialized or
// for which nv_kthread_q_init() failed, is a no-op.
//
void nv_kthread_q_stop(nv_kthread_q_t *q);
//
// All items that were in the queue before nv_kthread_q_flush was called, and
// all items scheduled by those items, will get run before this function
// returns.
//
// You may NOT call nv_kthread_q_flush() after having called nv_kthread_q_stop.
//
// This actually flushes the queue twice. That ensures that the queue is fully
// flushed, for an important use case: rescheduling from within one's own
// callback. In order to do that safely, you need to:
//
// -- set a flag that tells the callback to stop rescheduling itself.
//
// -- call either nv_kthread_q_flush or nv_kthread_q_stop (which internally
// calls nv_kthread_q_flush). The nv_kthread_q_flush, in turn, actually
// flushes the queue *twice*. The first flush waits for any callbacks
// to finish, that missed seeing the "stop_rescheduling" flag. The
// second flush waits for callbacks that were already scheduled when the
// first flush finished.
//
void nv_kthread_q_flush(nv_kthread_q_t *q);
// Assigns function_to_run and function_args to the q_item.
//
// This must be called before calling nv_kthread_q_schedule_q_item.
void nv_kthread_q_item_init(nv_kthread_q_item_t *q_item,
nv_q_func_t function_to_run,
void *function_args);
//
// The caller must have already set up the queue, via nv_kthread_q_init().
// The caller owns the lifetime of the q_item. The nv_kthread_q system runs
// q_items, and adds or removes them from the queue. However, due to the first
// law of q-dynamics, it neither creates nor destroys q_items.
//
// When the callback (the function_to_run argument) is actually run, it is OK
// to free the q_item from within that routine. The nv_kthread_q system
// promises to be done with the q_item before that point.
//
// nv_kthread_q_schedule_q_item may be called from multiple threads at once,
// without danger of corrupting anything. This routine may also be safely
// called from interrupt context, including top-half ISRs.
//
// It is OK to reschedule the same q_item from within its own callback function.
//
// It is also OK to attempt to reschedule the same q_item, if that q_item is
// already pending in the queue. The q_item will not be rescheduled if it is
// already pending.
//
// Returns true (non-zero) if the item was actually scheduled. Returns false if
// the item was not scheduled, which can happen if:
//
// -- The q_item was already pending in a queue, or
// -- The queue is shutting down (or not yet started up).
//
int nv_kthread_q_schedule_q_item(nv_kthread_q_t *q,
nv_kthread_q_item_t *q_item);
// Built-in test. Returns -1 if any subtest failed, or 0 upon success.
int nv_kthread_q_run_self_test(void);
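//
// Usage sketch (illustrative only; the queue name, callback, and argument
// are hypothetical):
//
//   static nv_kthread_q_t q;
//   static nv_kthread_q_item_t q_item;
//
//   static void my_callback(void *args)
//   {
//       ...
//   }
//
//   if (nv_kthread_q_init(&q, "my_q") != 0)
//       return;                                // kthread failed to start
//
//   nv_kthread_q_item_init(&q_item, my_callback, my_args);
//   nv_kthread_q_schedule_q_item(&q, &q_item);
//   ...
//   nv_kthread_q_stop(&q);                     // flushes, then stops kthread
//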
#endif // __NV_KTHREAD_QUEUE_H__

File diff suppressed because it is too large


@@ -0,0 +1,93 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2013-2020 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NV_LIST_HELPERS_H__
#define __NV_LIST_HELPERS_H__
#include <linux/list.h>
#include "conftest.h"
/*
* list_first_entry_or_null added by commit 6d7581e62f8b ("list: introduce
* list_first_entry_or_null") in v3.10 (2013-05-29).
*/
#if !defined(list_first_entry_or_null)
#define list_first_entry_or_null(ptr, type, member) \
(!list_empty(ptr) ? list_first_entry(ptr, type, member) : NULL)
#endif
/*
* Added by commit 93be3c2eb337 ("list: introduce list_last_entry(), use
* list_{first,last}_entry()") in v3.13 (2013-11-12).
*/
#if !defined(list_last_entry)
#define list_last_entry(ptr, type, member) \
list_entry((ptr)->prev, type, member)
#endif
/* list_last_entry_or_null() doesn't actually exist in the kernel */
#if !defined(list_last_entry_or_null)
#define list_last_entry_or_null(ptr, type, member) \
(!list_empty(ptr) ? list_last_entry(ptr, type, member) : NULL)
#endif
/*
* list_prev_entry() and list_next_entry added by commit 008208c6b26f
* ("list: introduce list_next_entry() and list_prev_entry()") in
* v3.13 (2013-11-12).
*/
#if !defined(list_prev_entry)
#define list_prev_entry(pos, member) \
list_entry((pos)->member.prev, typeof(*(pos)), member)
#endif
#if !defined(list_next_entry)
#define list_next_entry(pos, member) \
list_entry((pos)->member.next, typeof(*(pos)), member)
#endif
#if !defined(NV_LIST_IS_FIRST_PRESENT)
static inline int list_is_first(const struct list_head *list,
const struct list_head *head)
{
return list->prev == head;
}
#endif
#if defined(NV_HLIST_FOR_EACH_ENTRY_ARGUMENT_COUNT)
#if NV_HLIST_FOR_EACH_ENTRY_ARGUMENT_COUNT == 3
#define nv_hlist_for_each_entry(pos, head, member) \
hlist_for_each_entry(pos, head, member)
#else
#if !defined(hlist_entry_safe)
#define hlist_entry_safe(ptr, type, member) \
(ptr) ? hlist_entry(ptr, type, member) : NULL
#endif
#define nv_hlist_for_each_entry(pos, head, member) \
for (pos = hlist_entry_safe((head)->first, typeof(*(pos)), member); \
pos; \
pos = hlist_entry_safe((pos)->member.next, typeof(*(pos)), member))
#endif
#endif // NV_HLIST_FOR_EACH_ENTRY_ARGUMENT_COUNT
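/*
 * Usage sketch (illustrative only; 'struct item' and 'my_list' are
 * hypothetical):
 *
 *   struct item
 *   {
 *       struct list_head node;
 *   };
 *
 *   struct item *first = list_first_entry_or_null(&my_list, struct item, node);
 *   struct item *last  = list_last_entry_or_null(&my_list, struct item, node);
 *
 *   if (first != NULL)
 *       ...                    // list is non-empty; 'last' is non-NULL too
 */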
#endif // __NV_LIST_HELPERS_H__


@@ -0,0 +1,92 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2017 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _NV_LOCK_H_
#define _NV_LOCK_H_
#include "conftest.h"
#include <linux/spinlock.h>
#include <linux/rwsem.h>
#include <linux/sched.h> /* signal_pending, cond_resched */
#if defined(NV_LINUX_SCHED_SIGNAL_H_PRESENT)
#include <linux/sched/signal.h> /* signal_pending for kernels >= 4.11 */
#endif
#if defined(NV_LINUX_SEMAPHORE_H_PRESENT)
#include <linux/semaphore.h>
#else
#include <asm/semaphore.h>
#endif
#if defined(CONFIG_PREEMPT_RT) || defined(CONFIG_PREEMPT_RT_FULL)
typedef raw_spinlock_t nv_spinlock_t;
#define NV_SPIN_LOCK_INIT(lock) raw_spin_lock_init(lock)
#define NV_SPIN_LOCK_IRQ(lock) raw_spin_lock_irq(lock)
#define NV_SPIN_UNLOCK_IRQ(lock) raw_spin_unlock_irq(lock)
#define NV_SPIN_LOCK_IRQSAVE(lock,flags) raw_spin_lock_irqsave(lock,flags)
#define NV_SPIN_UNLOCK_IRQRESTORE(lock,flags) raw_spin_unlock_irqrestore(lock,flags)
#define NV_SPIN_LOCK(lock) raw_spin_lock(lock)
#define NV_SPIN_UNLOCK(lock) raw_spin_unlock(lock)
#define NV_SPIN_UNLOCK_WAIT(lock) raw_spin_unlock_wait(lock)
#else
typedef spinlock_t nv_spinlock_t;
#define NV_SPIN_LOCK_INIT(lock) spin_lock_init(lock)
#define NV_SPIN_LOCK_IRQ(lock) spin_lock_irq(lock)
#define NV_SPIN_UNLOCK_IRQ(lock) spin_unlock_irq(lock)
#define NV_SPIN_LOCK_IRQSAVE(lock,flags) spin_lock_irqsave(lock,flags)
#define NV_SPIN_UNLOCK_IRQRESTORE(lock,flags) spin_unlock_irqrestore(lock,flags)
#define NV_SPIN_LOCK(lock) spin_lock(lock)
#define NV_SPIN_UNLOCK(lock) spin_unlock(lock)
#define NV_SPIN_UNLOCK_WAIT(lock) spin_unlock_wait(lock)
#endif
#if defined(NV_CONFIG_PREEMPT_RT)
#define NV_INIT_SEMA(sema, val) sema_init(sema,val)
#else
#if !defined(__SEMAPHORE_INITIALIZER) && defined(__COMPAT_SEMAPHORE_INITIALIZER)
#define __SEMAPHORE_INITIALIZER __COMPAT_SEMAPHORE_INITIALIZER
#endif
#define NV_INIT_SEMA(sema, val) \
{ \
struct semaphore __sema = \
__SEMAPHORE_INITIALIZER(*(sema), val); \
*(sema) = __sema; \
}
#endif
#define NV_INIT_MUTEX(mutex) NV_INIT_SEMA(mutex, 1)
static inline int nv_down_read_interruptible(struct rw_semaphore *lock)
{
while (!down_read_trylock(lock))
{
if (signal_pending(current))
return -EINTR;
cond_resched();
}
return 0;
}
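/*
 * Usage sketch (illustrative only; 'lock' and 'sem' are hypothetical):
 *
 *   nv_spinlock_t lock;
 *   unsigned long flags;
 *
 *   NV_SPIN_LOCK_INIT(&lock);
 *   NV_SPIN_LOCK_IRQSAVE(&lock, flags);
 *   ...                                        // critical section
 *   NV_SPIN_UNLOCK_IRQRESTORE(&lock, flags);
 *
 *   // Interruptible read lock on a struct rw_semaphore:
 *   if (nv_down_read_interruptible(&sem) == 0)
 *   {
 *       ...
 *       up_read(&sem);
 *   }
 */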
#endif /* _NV_LOCK_H_ */


@@ -0,0 +1,49 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2017 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _NVMEMDBG_H_
#define _NVMEMDBG_H_
#include <nvtypes.h>
void nv_memdbg_init(void);
void nv_memdbg_add(void *addr, NvU64 size, const char *file, int line);
void nv_memdbg_remove(void *addr, NvU64 size, const char *file, int line);
void nv_memdbg_exit(void);
#if defined(NV_MEM_LOGGER)
#define NV_MEMDBG_ADD(ptr, size) \
nv_memdbg_add(ptr, size, __FILE__, __LINE__)
#define NV_MEMDBG_REMOVE(ptr, size) \
nv_memdbg_remove(ptr, size, __FILE__, __LINE__)
#else
#define NV_MEMDBG_ADD(ptr, size)
#define NV_MEMDBG_REMOVE(ptr, size)
#endif /* NV_MEM_LOGGER */
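/*
 * Usage sketch (illustrative only): an allocation wrapper might record each
 * allocation so that leaks can be reported when NV_MEM_LOGGER is enabled.
 * The kmalloc()/kfree() pairing shown here is an assumption:
 *
 *   void *ptr = kmalloc(size, GFP_KERNEL);
 *   if (ptr != NULL)
 *   {
 *       NV_MEMDBG_ADD(ptr, size);
 *   }
 *   ...
 *   NV_MEMDBG_REMOVE(ptr, size);
 *   kfree(ptr);
 */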
#endif /* _NVMEMDBG_H_ */


@@ -0,0 +1,273 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2016-2017 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NV_MM_H__
#define __NV_MM_H__
#include "conftest.h"
#if !defined(NV_VM_FAULT_T_IS_PRESENT)
typedef int vm_fault_t;
#endif
/* get_user_pages
*
* The 8-argument version of get_user_pages was deprecated by commit
 * (2016 Feb 12: cde70140fed8429acf7a14e2e2cbd3e329036653) for the non-remote case
* (calling get_user_pages with current and current->mm).
*
 * Completely moved to the 6-argument version of get_user_pages -
* 2016 Apr 4: c12d2da56d0e07d230968ee2305aaa86b93a6832
*
* write and force parameters were replaced with gup_flags by -
* 2016 Oct 12: 768ae309a96103ed02eb1e111e838c87854d8b51
*
* A 7-argument version of get_user_pages was introduced into linux-4.4.y by
* commit 8e50b8b07f462ab4b91bc1491b1c91bd75e4ad40 which cherry-picked the
* replacement of the write and force parameters with gup_flags
*
*/
#if defined(NV_GET_USER_PAGES_HAS_TASK_STRUCT)
#if defined(NV_GET_USER_PAGES_HAS_WRITE_AND_FORCE_ARGS)
#define NV_GET_USER_PAGES(start, nr_pages, write, force, pages, vmas) \
get_user_pages(current, current->mm, start, nr_pages, write, force, pages, vmas)
#else
#include <linux/mm.h>
#include <linux/sched.h>
static inline long NV_GET_USER_PAGES(unsigned long start,
unsigned long nr_pages,
int write,
int force,
struct page **pages,
struct vm_area_struct **vmas)
{
unsigned int flags = 0;
if (write)
flags |= FOLL_WRITE;
if (force)
flags |= FOLL_FORCE;
return get_user_pages(current, current->mm, start, nr_pages, flags,
pages, vmas);
}
#endif
#else
#if defined(NV_GET_USER_PAGES_HAS_WRITE_AND_FORCE_ARGS)
#define NV_GET_USER_PAGES get_user_pages
#else
#include <linux/mm.h>
static inline long NV_GET_USER_PAGES(unsigned long start,
unsigned long nr_pages,
int write,
int force,
struct page **pages,
struct vm_area_struct **vmas)
{
unsigned int flags = 0;
if (write)
flags |= FOLL_WRITE;
if (force)
flags |= FOLL_FORCE;
return get_user_pages(start, nr_pages, flags, pages, vmas);
}
#endif
#endif
/*
* get_user_pages_remote() was added by commit 1e9877902dc7
* ("mm/gup: Introduce get_user_pages_remote()") in v4.6 (2016-02-12).
*
* The very next commit cde70140fed8 ("mm/gup: Overload get_user_pages()
* functions") deprecated the 8-argument version of get_user_pages for the
* non-remote case (calling get_user_pages with current and current->mm).
*
* The guidelines are: call NV_GET_USER_PAGES_REMOTE if you need the 8-argument
* version that uses something other than current and current->mm. Use
 * NV_GET_USER_PAGES if you are referring to current and current->mm.
*
* Note that get_user_pages_remote() requires the caller to hold a reference on
* the task_struct (if non-NULL and if this API has tsk argument) and the mm_struct.
* This will always be true when using current and current->mm. If the kernel passes
* the driver a vma via driver callback, the kernel holds a reference on vma->vm_mm
* over that callback.
*
* get_user_pages_remote() write/force parameters were replaced
* with gup_flags by commit 9beae1ea8930 ("mm: replace get_user_pages_remote()
* write/force parameters with gup_flags") in v4.9 (2016-10-13).
*
* get_user_pages_remote() added 'locked' parameter by commit 5b56d49fc31d
* ("mm: add locked parameter to get_user_pages_remote()") in
* v4.10 (2016-12-14).
*
* get_user_pages_remote() removed 'tsk' parameter by
* commit 64019a2e467a ("mm/gup: remove task_struct pointer for
* all gup code") in v5.9-rc1 (2020-08-11).
*
*/
#if defined(NV_GET_USER_PAGES_REMOTE_PRESENT)
#if defined(NV_GET_USER_PAGES_REMOTE_HAS_WRITE_AND_FORCE_ARGS)
#define NV_GET_USER_PAGES_REMOTE get_user_pages_remote
#else
static inline long NV_GET_USER_PAGES_REMOTE(struct task_struct *tsk,
struct mm_struct *mm,
unsigned long start,
unsigned long nr_pages,
int write,
int force,
struct page **pages,
struct vm_area_struct **vmas)
{
unsigned int flags = 0;
if (write)
flags |= FOLL_WRITE;
if (force)
flags |= FOLL_FORCE;
#if defined(NV_GET_USER_PAGES_REMOTE_HAS_LOCKED_ARG)
#if defined (NV_GET_USER_PAGES_REMOTE_HAS_TSK_ARG)
return get_user_pages_remote(tsk, mm, start, nr_pages, flags,
pages, vmas, NULL);
#else
return get_user_pages_remote(mm, start, nr_pages, flags,
pages, vmas, NULL);
#endif
#else
return get_user_pages_remote(tsk, mm, start, nr_pages, flags,
pages, vmas);
#endif
}
#endif
#else
#if defined(NV_GET_USER_PAGES_HAS_WRITE_AND_FORCE_ARGS)
#define NV_GET_USER_PAGES_REMOTE get_user_pages
#else
#include <linux/mm.h>
#include <linux/sched.h>
static inline long NV_GET_USER_PAGES_REMOTE(struct task_struct *tsk,
struct mm_struct *mm,
unsigned long start,
unsigned long nr_pages,
int write,
int force,
struct page **pages,
struct vm_area_struct **vmas)
{
unsigned int flags = 0;
if (write)
flags |= FOLL_WRITE;
if (force)
flags |= FOLL_FORCE;
return get_user_pages(tsk, mm, start, nr_pages, flags, pages, vmas);
}
#endif
#endif
/*
* The .virtual_address field was effectively renamed to .address, by these
* two commits:
*
* struct vm_fault: .address was added by:
* 2016-12-14 82b0f8c39a3869b6fd2a10e180a862248736ec6f
*
* struct vm_fault: .virtual_address was removed by:
* 2016-12-14 1a29d85eb0f19b7d8271923d8917d7b4f5540b3e
*/
static inline unsigned long nv_page_fault_va(struct vm_fault *vmf)
{
#if defined(NV_VM_FAULT_HAS_ADDRESS)
return vmf->address;
#else
return (unsigned long)(vmf->virtual_address);
#endif
}
static inline void nv_mmap_read_lock(struct mm_struct *mm)
{
#if defined(NV_MM_HAS_MMAP_LOCK)
mmap_read_lock(mm);
#else
down_read(&mm->mmap_sem);
#endif
}
static inline void nv_mmap_read_unlock(struct mm_struct *mm)
{
#if defined(NV_MM_HAS_MMAP_LOCK)
mmap_read_unlock(mm);
#else
up_read(&mm->mmap_sem);
#endif
}
static inline void nv_mmap_write_lock(struct mm_struct *mm)
{
#if defined(NV_MM_HAS_MMAP_LOCK)
mmap_write_lock(mm);
#else
down_write(&mm->mmap_sem);
#endif
}
static inline void nv_mmap_write_unlock(struct mm_struct *mm)
{
#if defined(NV_MM_HAS_MMAP_LOCK)
mmap_write_unlock(mm);
#else
up_write(&mm->mmap_sem);
#endif
}
static inline int nv_mm_rwsem_is_locked(struct mm_struct *mm)
{
#if defined(NV_MM_HAS_MMAP_LOCK)
return rwsem_is_locked(&mm->mmap_lock);
#else
return rwsem_is_locked(&mm->mmap_sem);
#endif
}
static inline struct rw_semaphore *nv_mmap_get_lock(struct mm_struct *mm)
{
#if defined(NV_MM_HAS_MMAP_LOCK)
return &mm->mmap_lock;
#else
return &mm->mmap_sem;
#endif
}
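/*
 * Usage sketch (illustrative only; 'user_va' is a hypothetical user virtual
 * address in the current process): pin one page for write access while
 * holding the mmap lock via the wrappers above.
 *
 *   struct page *pages[1];
 *   long ret;
 *
 *   nv_mmap_read_lock(current->mm);
 *   ret = NV_GET_USER_PAGES(user_va, 1, 1, 0, pages, NULL); // write, !force
 *   nv_mmap_read_unlock(current->mm);
 *
 *   if (ret == 1)
 *   {
 *       ...
 *       put_page(pages[0]);       // release the pinned page when done
 *   }
 */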
#endif // __NV_MM_H__


@@ -0,0 +1,122 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2015 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _NV_MODESET_INTERFACE_H_
#define _NV_MODESET_INTERFACE_H_
/*
* This file defines the interface between the nvidia and
* nvidia-modeset UNIX kernel modules.
*
* The nvidia-modeset kernel module calls the nvidia kernel module's
* nvidia_get_rm_ops() function to get the RM API function pointers
* which it will need.
*/
#include "nvstatus.h"
#include "nv-gpu-info.h"
/*
* nvidia_stack_s is defined in nv.h, which pulls in a lot of other
* dependencies. The nvidia-modeset kernel module doesn't need to
 * dereference the nvidia_stack_s pointer, so just treat it as an
* opaque pointer for purposes of this API definition.
*/
typedef struct nvidia_stack_s *nvidia_modeset_stack_ptr;
/*
* Callback functions from the RM OS interface layer into the NVKMS OS interface
* layer.
*
* These functions should be called without the RM lock held, using the kernel's
* native calling convention.
*/
typedef struct {
/*
* Suspend & resume callbacks. Note that these are called once per GPU.
*/
void (*suspend)(NvU32 gpu_id);
void (*resume)(NvU32 gpu_id);
} nvidia_modeset_callbacks_t;
/*
* The RM API entry points which the nvidia-modeset kernel module should
* call in the nvidia kernel module.
*/
typedef struct {
/*
* The nvidia-modeset kernel module should assign version_string
* before passing the structure to the nvidia kernel module, so
* that a version match can be confirmed: it is not supported to
* mix nvidia and nvidia-modeset kernel modules from different
* releases.
*/
const char *version_string;
/*
* Return system information.
*/
struct {
/* Availability of write combining support for video memory */
NvBool allow_write_combining;
} system_info;
/*
* Allocate and free an nvidia_stack_t to pass into
* nvidia_modeset_rm_ops_t::op(). An nvidia_stack_t must only be
* used by one thread at a time.
*
* Note that on architectures where an alternate stack is not
* used, alloc_stack() will set sp=NULL even when it returns 0
* (success). I.e., check the return value, not the sp value.
*/
int (*alloc_stack)(nvidia_modeset_stack_ptr *sp);
void (*free_stack)(nvidia_modeset_stack_ptr sp);
/*
* Enumerate the list of GPUs probed by the nvidia driver.
*
* gpu_info is an array of NVIDIA_MAX_GPUS elements. The number of GPUs
* in the system is returned.
*/
NvU32 (*enumerate_gpus)(nv_gpu_info_t *gpu_info);
/*
* {open,close}_gpu() raise and lower the reference count of the
* specified GPU. This is equivalent to opening and closing a
* /dev/nvidiaN device file from user-space.
*/
int (*open_gpu)(NvU32 gpu_id, nvidia_modeset_stack_ptr sp);
void (*close_gpu)(NvU32 gpu_id, nvidia_modeset_stack_ptr sp);
void (*op)(nvidia_modeset_stack_ptr sp, void *ops_cmd);
int (*set_callbacks)(const nvidia_modeset_callbacks_t *cb);
} nvidia_modeset_rm_ops_t;
NV_STATUS nvidia_get_rm_ops(nvidia_modeset_rm_ops_t *rm_ops);
#endif /* _NV_MODESET_INTERFACE_H_ */
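As a hedged sketch of the handshake described above, the nvidia-modeset module might initialize roughly as follows at load time. NV_VERSION_STRING and the errno-style return mapping are illustrative assumptions, while NVIDIA_MAX_GPUS is expected to come from nv-gpu-info.h:

static nvidia_modeset_rm_ops_t rm_ops;

static int example_modeset_init(void)
{
    nvidia_modeset_stack_ptr sp = NULL;
    nv_gpu_info_t gpu_info[NVIDIA_MAX_GPUS];
    NvU32 gpu_count;

    /* A version match is mandatory: mixing releases is unsupported. */
    rm_ops.version_string = NV_VERSION_STRING; /* assumed build define */

    if (nvidia_get_rm_ops(&rm_ops) != NV_OK)
        return -EINVAL;

    /* Check the return value, not sp: sp may legitimately be NULL. */
    if (rm_ops.alloc_stack(&sp) != 0)
        return -ENOMEM;

    gpu_count = rm_ops.enumerate_gpus(gpu_info);

    rm_ops.free_stack(sp);
    return (gpu_count > 0) ? 0 : -ENODEV;
}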

View File

@@ -0,0 +1,115 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2018 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _NV_MSI_H_
#define _NV_MSI_H_
#include "nv-linux.h"
#if (defined(CONFIG_X86_LOCAL_APIC) || defined(NVCPU_AARCH64) || \
defined(NVCPU_PPC64LE)) && \
(defined(CONFIG_PCI_MSI) || defined(CONFIG_PCI_USE_VECTOR))
#define NV_LINUX_PCIE_MSI_SUPPORTED
#endif
#if !defined(NV_LINUX_PCIE_MSI_SUPPORTED) || !defined(CONFIG_PCI_MSI)
#define NV_PCI_DISABLE_MSI(pci_dev)
#else
#define NV_PCI_DISABLE_MSI(pci_dev) pci_disable_msi(pci_dev)
#endif
irqreturn_t nvidia_isr (int, void *);
irqreturn_t nvidia_isr_msix (int, void *);
irqreturn_t nvidia_isr_kthread_bh (int, void *);
irqreturn_t nvidia_isr_msix_kthread_bh(int, void *);
#if defined(NV_LINUX_PCIE_MSI_SUPPORTED)
void NV_API_CALL nv_init_msi (nv_state_t *);
void NV_API_CALL nv_init_msix (nv_state_t *);
NvS32 NV_API_CALL nv_request_msix_irq (nv_linux_state_t *);
#define NV_PCI_MSIX_FLAGS 2
#define NV_PCI_MSIX_FLAGS_QSIZE 0x7FF
static inline void nv_free_msix_irq(nv_linux_state_t *nvl)
{
int i;
for (i = 0; i < nvl->num_intr; i++)
{
free_irq(nvl->msix_entries[i].vector, (void *)nvl);
}
}
static inline int nv_get_max_irq(struct pci_dev *pci_dev)
{
int nvec;
int cap_ptr;
NvU16 ctrl;
cap_ptr = pci_find_capability(pci_dev, PCI_CAP_ID_MSIX);
/*
* The 'PCI_MSIX_FLAGS' was added in 2.6.21-rc3 by:
* 2007-03-05 f5f2b13129a6541debf8851bae843cbbf48298b7
*/
#if defined(PCI_MSIX_FLAGS)
pci_read_config_word(pci_dev, cap_ptr + PCI_MSIX_FLAGS, &ctrl);
nvec = (ctrl & PCI_MSIX_FLAGS_QSIZE) + 1;
#else
pci_read_config_word(pci_dev, cap_ptr + NV_PCI_MSIX_FLAGS, &ctrl);
nvec = (ctrl & NV_PCI_MSIX_FLAGS_QSIZE) + 1;
#endif
return nvec;
}
static inline int nv_pci_enable_msix(nv_linux_state_t *nvl, int nvec)
{
int rc = 0;
/*
* pci_enable_msix_range() replaced pci_enable_msix() in 3.14-rc1:
* 2014-01-03 302a2523c277bea0bbe8340312b09507905849ed
*/
#if defined(NV_PCI_ENABLE_MSIX_RANGE_PRESENT)
// We require all the vectors we are requesting so use the same min and max
rc = pci_enable_msix_range(nvl->pci_dev, nvl->msix_entries, nvec, nvec);
if (rc < 0)
{
return NV_ERR_OPERATING_SYSTEM;
}
WARN_ON(nvec != rc);
#else
rc = pci_enable_msix(nvl->pci_dev, nvl->msix_entries, nvec);
if (rc != 0)
{
return NV_ERR_OPERATING_SYSTEM;
}
#endif
nvl->num_intr = nvec;
return NV_OK;
}
#endif
#endif /* _NV_MSI_H_ */
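A sketch of how these helpers might fit together when arming MSI-X, assuming nvl->msix_entries has already been allocated to hold nvec entries (example_setup_msix is illustrative; the real driver also caps nvec and handles the cleanup paths):

static int example_setup_msix(nv_linux_state_t *nvl)
{
    int i;
    int nvec = nv_get_max_irq(nvl->pci_dev);

    /* Each msix_entry must carry its vector index before enabling. */
    for (i = 0; i < nvec; i++)
        nvl->msix_entries[i].entry = i;

    if (nv_pci_enable_msix(nvl, nvec) != NV_OK)
        return -EIO;

    /* Pair any later failure with nv_free_msix_irq(), which undoes the
     * request_irq() calls. */
    return nv_request_msix_irq(nvl);
}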

View File

@@ -0,0 +1,36 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2020 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _NV_PCI_TYPES_H_
#define _NV_PCI_TYPES_H_
#include <linux/pci.h>
#include "conftest.h"
#if defined(NV_PCI_CHANNEL_STATE_PRESENT)
typedef enum pci_channel_state nv_pci_channel_state_t;
#else
typedef pci_channel_state_t nv_pci_channel_state_t;
#endif
#endif

View File

@@ -0,0 +1,48 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2019 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _NV_PCI_H_
#define _NV_PCI_H_
#include <linux/pci.h>
#include "nv-linux.h"
#if defined(NV_DEV_IS_PCI_PRESENT)
#define nv_dev_is_pci(dev) dev_is_pci(dev)
#else
/*
* Non-PCI devices are only supported on kernels which expose the
* dev_is_pci() function. For older kernels, we only support PCI
* devices, so we return true to take all the PCI code paths.
*/
#define nv_dev_is_pci(dev) (true)
#endif
int nv_pci_register_driver(void);
void nv_pci_unregister_driver(void);
int nv_pci_count_devices(void);
NvU8 nv_find_pci_capability(struct pci_dev *, NvU8);
int nvidia_dev_get_pci_info(const NvU8 *, struct pci_dev **, NvU64 *, NvU64 *);
nv_linux_state_t * find_pci(NvU32, NvU8, NvU8, NvU8);
#endif

View File

@@ -0,0 +1,134 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2015 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NV_PGPROT_H__
#define __NV_PGPROT_H__
#include "cpuopsys.h"
#include <linux/mm.h>
#if !defined(NV_VMWARE)
#if defined(NVCPU_X86_64)
/* mark memory UC-, rather than UC (don't use _PAGE_PWT) */
static inline pgprot_t pgprot_noncached_weak(pgprot_t old_prot)
{
pgprot_t new_prot = old_prot;
if (boot_cpu_data.x86 > 3)
new_prot = __pgprot(pgprot_val(old_prot) | _PAGE_PCD);
return new_prot;
}
#if !defined (pgprot_noncached)
static inline pgprot_t pgprot_noncached(pgprot_t old_prot)
{
pgprot_t new_prot = old_prot;
if (boot_cpu_data.x86 > 3)
new_prot = __pgprot(pgprot_val(old_prot) | _PAGE_PCD | _PAGE_PWT);
return new_prot;
}
#endif
static inline pgprot_t pgprot_modify_writecombine(pgprot_t old_prot)
{
pgprot_t new_prot = old_prot;
pgprot_val(new_prot) &= ~(_PAGE_PSE | _PAGE_PCD | _PAGE_PWT);
new_prot = __pgprot(pgprot_val(new_prot) | _PAGE_PWT);
return new_prot;
}
#endif /* defined(NVCPU_X86_64) */
#endif /* !defined(NV_VMWARE) */
#if defined(NVCPU_AARCH64)
/*
* Don't rely on the kernel's definition of pgprot_noncached(), as on 64-bit
* ARM that's not for system memory, but device memory instead. For I/O cache
* coherent systems, use cached mappings instead of uncached.
*/
#define NV_PGPROT_UNCACHED(old_prot) \
((nvos_is_chipset_io_coherent()) ? \
(old_prot) : \
__pgprot_modify((old_prot), PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_NORMAL_NC)))
#elif defined(NVCPU_PPC64LE)
/* Don't attempt to mark sysmem pages as uncached on ppc64le */
#define NV_PGPROT_UNCACHED(old_prot) old_prot
#else
#define NV_PGPROT_UNCACHED(old_prot) pgprot_noncached(old_prot)
#endif
#define NV_PGPROT_UNCACHED_DEVICE(old_prot) pgprot_noncached(old_prot)
#if defined(NVCPU_AARCH64)
#if defined(NV_MT_DEVICE_GRE_PRESENT)
#define NV_PROT_WRITE_COMBINED_DEVICE (PROT_DEFAULT | PTE_PXN | PTE_UXN | \
PTE_ATTRINDX(MT_DEVICE_GRE))
#else
#define NV_PROT_WRITE_COMBINED_DEVICE (PROT_DEFAULT | PTE_PXN | PTE_UXN | \
PTE_ATTRINDX(MT_DEVICE_nGnRE))
#endif
#define NV_PGPROT_WRITE_COMBINED_DEVICE(old_prot) \
__pgprot_modify(old_prot, PTE_ATTRINDX_MASK, NV_PROT_WRITE_COMBINED_DEVICE)
#define NV_PGPROT_WRITE_COMBINED(old_prot) NV_PGPROT_UNCACHED(old_prot)
#define NV_PGPROT_READ_ONLY(old_prot) \
__pgprot_modify(old_prot, 0, PTE_RDONLY)
#elif defined(NVCPU_X86_64)
#define NV_PGPROT_UNCACHED_WEAK(old_prot) pgprot_noncached_weak(old_prot)
#define NV_PGPROT_WRITE_COMBINED_DEVICE(old_prot) \
pgprot_modify_writecombine(old_prot)
#define NV_PGPROT_WRITE_COMBINED(old_prot) \
NV_PGPROT_WRITE_COMBINED_DEVICE(old_prot)
#define NV_PGPROT_READ_ONLY(old_prot) \
__pgprot(pgprot_val((old_prot)) & ~_PAGE_RW)
#elif defined(NVCPU_PPC64LE)
/*
* Some kernels use H_PAGE instead of _PAGE
*/
#if defined(_PAGE_RW)
#define NV_PAGE_RW _PAGE_RW
#elif defined(H_PAGE_RW)
#define NV_PAGE_RW H_PAGE_RW
#else
#warning "The kernel does not provide page protection defines!"
#endif
#if defined(_PAGE_4K_PFN)
#define NV_PAGE_4K_PFN _PAGE_4K_PFN
#elif defined(H_PAGE_4K_PFN)
#define NV_PAGE_4K_PFN H_PAGE_4K_PFN
#else
#undef NV_PAGE_4K_PFN
#endif
#define NV_PGPROT_WRITE_COMBINED_DEVICE(old_prot) \
pgprot_writecombine(old_prot)
/* Don't attempt to mark sysmem pages as write combined on ppc64le */
#define NV_PGPROT_WRITE_COMBINED(old_prot) old_prot
#define NV_PGPROT_READ_ONLY(old_prot) \
__pgprot(pgprot_val((old_prot)) & ~NV_PAGE_RW)
#else
/* Writecombine is not supported */
#undef NV_PGPROT_WRITE_COMBINED_DEVICE
#undef NV_PGPROT_WRITE_COMBINED
#define NV_PGPROT_READ_ONLY(old_prot)
#endif
#endif /* __NV_PGPROT_H__ */
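A minimal sketch of how these macros are typically consumed in an mmap() handler, selecting a write-combined device mapping in a CPU-independent way before remapping (example_mmap_device_wc is an illustrative name; error handling is elided):

static int example_mmap_device_wc(struct vm_area_struct *vma, unsigned long pfn)
{
    vma->vm_page_prot = NV_PGPROT_WRITE_COMBINED_DEVICE(vma->vm_page_prot);

    return remap_pfn_range(vma, vma->vm_start, pfn,
                           vma->vm_end - vma->vm_start, vma->vm_page_prot);
}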

View File

@@ -0,0 +1,227 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _NV_PROCFS_UTILS_H
#define _NV_PROCFS_UTILS_H
#include "conftest.h"
#ifdef CONFIG_PROC_FS
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
/*
* Allow procfs to create file to exercise error forwarding.
* This is supported by CRAY platforms.
*/
#if defined(CONFIG_CRAY_XT)
#define EXERCISE_ERROR_FORWARDING NV_TRUE
#else
#define EXERCISE_ERROR_FORWARDING NV_FALSE
#endif
#define IS_EXERCISE_ERROR_FORWARDING_ENABLED() (EXERCISE_ERROR_FORWARDING)
#if defined(NV_PROC_OPS_PRESENT)
typedef struct proc_ops nv_proc_ops_t;
#define NV_PROC_OPS_SET_OWNER()
#define NV_PROC_OPS_OPEN proc_open
#define NV_PROC_OPS_READ proc_read
#define NV_PROC_OPS_WRITE proc_write
#define NV_PROC_OPS_LSEEK proc_lseek
#define NV_PROC_OPS_RELEASE proc_release
#else
typedef struct file_operations nv_proc_ops_t;
#define NV_PROC_OPS_SET_OWNER() .owner = THIS_MODULE,
#define NV_PROC_OPS_OPEN open
#define NV_PROC_OPS_READ read
#define NV_PROC_OPS_WRITE write
#define NV_PROC_OPS_LSEEK llseek
#define NV_PROC_OPS_RELEASE release
#endif
#define NV_CREATE_PROC_FILE(filename,parent,__name,__data) \
({ \
struct proc_dir_entry *__entry; \
int mode = (S_IFREG | S_IRUGO); \
const nv_proc_ops_t *fops = &nv_procfs_##__name##_fops; \
if (fops->NV_PROC_OPS_WRITE != 0) \
mode |= S_IWUSR; \
__entry = proc_create_data(filename, mode, parent, fops, __data);\
__entry; \
})
/*
* proc_mkdir_mode exists in Linux 2.6.9, but isn't exported until Linux 3.0.
* Use the older interface instead unless the newer interface is necessary.
*/
#if defined(NV_PROC_REMOVE_PRESENT)
# define NV_PROC_MKDIR_MODE(name, mode, parent) \
proc_mkdir_mode(name, mode, parent)
#else
# define NV_PROC_MKDIR_MODE(name, mode, parent) \
({ \
struct proc_dir_entry *__entry; \
__entry = create_proc_entry(name, mode, parent); \
__entry; \
})
#endif
#define NV_CREATE_PROC_DIR(name,parent) \
({ \
struct proc_dir_entry *__entry; \
int mode = (S_IFDIR | S_IRUGO | S_IXUGO); \
__entry = NV_PROC_MKDIR_MODE(name, mode, parent); \
__entry; \
})
#if defined(NV_PDE_DATA_LOWER_CASE_PRESENT)
#define NV_PDE_DATA(inode) pde_data(inode)
#else
#define NV_PDE_DATA(inode) PDE_DATA(inode)
#endif
#if defined(NV_PROC_REMOVE_PRESENT)
# define NV_REMOVE_PROC_ENTRY(entry) \
proc_remove(entry);
#else
# define NV_REMOVE_PROC_ENTRY(entry) \
remove_proc_entry(entry->name, entry->parent);
#endif
void nv_procfs_unregister_all(struct proc_dir_entry *entry,
struct proc_dir_entry *delimiter);
#define NV_DEFINE_SINGLE_PROCFS_FILE_HELPER(name, lock) \
static int nv_procfs_open_##name( \
struct inode *inode, \
struct file *filep \
) \
{ \
int ret; \
ret = single_open(filep, nv_procfs_read_##name, \
NV_PDE_DATA(inode)); \
if (ret < 0) \
{ \
return ret; \
} \
ret = nv_down_read_interruptible(&lock); \
if (ret < 0) \
{ \
single_release(inode, filep); \
} \
return ret; \
} \
\
static int nv_procfs_release_##name( \
struct inode *inode, \
struct file *filep \
) \
{ \
up_read(&lock); \
return single_release(inode, filep); \
}
#define NV_DEFINE_SINGLE_PROCFS_FILE_READ_ONLY(name, lock) \
NV_DEFINE_SINGLE_PROCFS_FILE_HELPER(name, lock) \
\
static const nv_proc_ops_t nv_procfs_##name##_fops = { \
NV_PROC_OPS_SET_OWNER() \
.NV_PROC_OPS_OPEN = nv_procfs_open_##name, \
.NV_PROC_OPS_READ = seq_read, \
.NV_PROC_OPS_LSEEK = seq_lseek, \
.NV_PROC_OPS_RELEASE = nv_procfs_release_##name, \
};
#define NV_DEFINE_SINGLE_PROCFS_FILE_READ_WRITE(name, lock, \
write_callback) \
NV_DEFINE_SINGLE_PROCFS_FILE_HELPER(name, lock) \
\
static ssize_t nv_procfs_write_##name( \
struct file *file, \
const char __user *buf, \
size_t size, \
loff_t *ppos \
) \
{ \
ssize_t ret; \
struct seq_file *s; \
\
s = file->private_data; \
if (s == NULL) \
{ \
return -EIO; \
} \
\
ret = write_callback(s, buf + *ppos, size - *ppos); \
if (ret == 0) \
{ \
/* avoid infinite loop */ \
ret = -EIO; \
} \
return ret; \
} \
\
static const nv_proc_ops_t nv_procfs_##name##_fops = { \
NV_PROC_OPS_SET_OWNER() \
.NV_PROC_OPS_OPEN = nv_procfs_open_##name, \
.NV_PROC_OPS_READ = seq_read, \
.NV_PROC_OPS_WRITE = nv_procfs_write_##name, \
.NV_PROC_OPS_LSEEK = seq_lseek, \
.NV_PROC_OPS_RELEASE = nv_procfs_release_##name, \
};
#define NV_DEFINE_SINGLE_PROCFS_FILE_READ_ONLY_WITHOUT_LOCK(name) \
static int nv_procfs_open_##name( \
struct inode *inode, \
struct file *filep \
) \
{ \
int ret; \
ret = single_open(filep, nv_procfs_read_##name, \
NV_PDE_DATA(inode)); \
return ret; \
} \
\
static int nv_procfs_release_##name( \
struct inode *inode, \
struct file *filep \
) \
{ \
return single_release(inode, filep); \
} \
\
static const nv_proc_ops_t nv_procfs_##name##_fops = { \
NV_PROC_OPS_SET_OWNER() \
.NV_PROC_OPS_OPEN = nv_procfs_open_##name, \
.NV_PROC_OPS_READ = seq_read, \
.NV_PROC_OPS_LSEEK = seq_lseek, \
.NV_PROC_OPS_RELEASE = nv_procfs_release_##name, \
};
#endif /* CONFIG_PROC_FS */
#endif /* _NV_PROCFS_UTILS_H */
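Putting the macros together, a read-only procfs file could be defined as below. The names nv_procfs_read_example and example_lock are illustrative, and nv_down_read_interruptible() is assumed to be provided elsewhere in the kernel interface layer:

static DECLARE_RWSEM(example_lock);

static int nv_procfs_read_example(struct seq_file *s, void *v)
{
    seq_printf(s, "example value\n");
    return 0;
}

NV_DEFINE_SINGLE_PROCFS_FILE_READ_ONLY(example, example_lock)

static int example_register(struct proc_dir_entry *parent)
{
    /* Mode becomes S_IFREG | S_IRUGO; with no write handler, no S_IWUSR. */
    return (NV_CREATE_PROC_FILE("example", parent, example, NULL) != NULL)
        ? 0 : -ENOMEM;
}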

View File

@@ -0,0 +1,28 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2015-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _NV_PROCFS_H
#define _NV_PROCFS_H
#include "nv-procfs-utils.h"
#endif /* _NV_PROCFS_H */

View File

@@ -0,0 +1,100 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 1999-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _NV_PROTO_H_
#define _NV_PROTO_H_
#include "nv-pci.h"
#include "nv-register-module.h"
extern const char *nv_device_name;
extern nvidia_module_t nv_fops;
void nv_acpi_register_notifier (nv_linux_state_t *);
void nv_acpi_unregister_notifier (nv_linux_state_t *);
int nv_acpi_init (void);
int nv_acpi_uninit (void);
NvU8 nv_find_pci_capability (struct pci_dev *, NvU8);
int nv_procfs_init (void);
void nv_procfs_exit (void);
void nv_procfs_add_warning (const char *, const char *);
int nv_procfs_add_gpu (nv_linux_state_t *);
void nv_procfs_remove_gpu (nv_linux_state_t *);
int nvidia_mmap (struct file *, struct vm_area_struct *);
int nvidia_mmap_helper (nv_state_t *, nv_linux_file_private_t *, nvidia_stack_t *, struct vm_area_struct *, void *);
int nv_encode_caching (pgprot_t *, NvU32, NvU32);
void nv_revoke_gpu_mappings_locked(nv_state_t *);
NvUPtr nv_vm_map_pages (struct page **, NvU32, NvBool, NvBool);
void nv_vm_unmap_pages (NvUPtr, NvU32);
NV_STATUS nv_alloc_contig_pages (nv_state_t *, nv_alloc_t *);
void nv_free_contig_pages (nv_alloc_t *);
NV_STATUS nv_alloc_system_pages (nv_state_t *, nv_alloc_t *);
void nv_free_system_pages (nv_alloc_t *);
void nv_address_space_init_once (struct address_space *mapping);
int nv_uvm_init (void);
void nv_uvm_exit (void);
NV_STATUS nv_uvm_suspend (void);
NV_STATUS nv_uvm_resume (void);
void nv_uvm_notify_start_device (const NvU8 *uuid);
void nv_uvm_notify_stop_device (const NvU8 *uuid);
NV_STATUS nv_uvm_event_interrupt (const NvU8 *uuid);
/* Move these to nv.h once implemented by other UNIX platforms */
NvBool nvidia_get_gpuid_list (NvU32 *gpu_ids, NvU32 *gpu_count);
int nvidia_dev_get (NvU32, nvidia_stack_t *);
void nvidia_dev_put (NvU32, nvidia_stack_t *);
int nvidia_dev_get_uuid (const NvU8 *, nvidia_stack_t *);
void nvidia_dev_put_uuid (const NvU8 *, nvidia_stack_t *);
int nvidia_dev_block_gc6 (const NvU8 *, nvidia_stack_t *);
int nvidia_dev_unblock_gc6 (const NvU8 *, nvidia_stack_t *);
#if defined(CONFIG_PM)
NV_STATUS nv_set_system_power_state (nv_power_state_t, nv_pm_action_depth_t);
#endif
void nvidia_modeset_suspend (NvU32 gpuId);
void nvidia_modeset_resume (NvU32 gpuId);
NvBool nv_is_uuid_in_gpu_exclusion_list (const char *);
NV_STATUS nv_parse_per_device_option_string(nvidia_stack_t *sp);
nv_linux_state_t * find_uuid(const NvU8 *uuid);
void nv_report_error(struct pci_dev *dev, NvU32 error_number, const char *format, va_list ap);
void nv_shutdown_adapter(nvidia_stack_t *, nv_state_t *, nv_linux_state_t *);
void nv_dev_free_stacks(nv_linux_state_t *);
NvBool nv_lock_init_locks(nvidia_stack_t *, nv_state_t *);
void nv_lock_destroy_locks(nvidia_stack_t *, nv_state_t *);
void nv_linux_add_device_locked(nv_linux_state_t *);
void nv_linux_remove_device_locked(nv_linux_state_t *);
NvBool nv_acpi_power_resource_method_present(struct pci_dev *);
#endif /* _NV_PROTO_H_ */

View File

@@ -0,0 +1,55 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2012-2013 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _NV_REGISTER_MODULE_H_
#define _NV_REGISTER_MODULE_H_
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/poll.h>
#include "nvtypes.h"
typedef struct nvidia_module_s {
struct module *owner;
/* e.g., nvidia0, nvidia1, ... */
const char *module_name;
/* module instance */
NvU32 instance;
/* file operations */
int (*open)(struct inode *, struct file *filp);
int (*close)(struct inode *, struct file *filp);
int (*mmap)(struct file *filp, struct vm_area_struct *vma);
int (*ioctl)(struct inode *, struct file * file, unsigned int cmd, unsigned long arg);
unsigned int (*poll)(struct file * file, poll_table *wait);
} nvidia_module_t;
int nvidia_register_module(nvidia_module_t *);
int nvidia_unregister_module(nvidia_module_t *);
#endif
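A hedged sketch of filling in and registering this structure from a client module; the example_* handlers are placeholders, and whether unimplemented callbacks may be left NULL is an assumption here:

static int example_open(struct inode *inode, struct file *filp)  { return 0; }
static int example_close(struct inode *inode, struct file *filp) { return 0; }

static nvidia_module_t example_module = {
    .owner       = THIS_MODULE,
    .module_name = "nvidia",
    .instance    = 0,
    .open        = example_open,
    .close       = example_close,
    /* .mmap, .ioctl and .poll omitted for brevity in this sketch */
};

static int __init example_init(void)
{
    return nvidia_register_module(&example_module);
}

static void __exit example_exit(void)
{
    nvidia_unregister_module(&example_module);
}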

View File

@@ -0,0 +1,82 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2019 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _NV_RETPOLINE_H_
#define _NV_RETPOLINE_H_
#include "cpuopsys.h"
#if (NV_SPECTRE_V2 == 0)
#define NV_RETPOLINE_THUNK NV_SPEC_THUNK
#else
#define NV_RETPOLINE_THUNK NV_NOSPEC_THUNK
#endif
#if defined(NVCPU_X86_64)
#define NV_SPEC_THUNK(REG) \
__asm__( \
".weak __x86_indirect_thunk_" #REG ";" \
".type __x86_indirect_thunk_" #REG ", @function;" \
"__x86_indirect_thunk_" #REG ":" \
" .cfi_startproc;" \
" jmp *%" #REG ";" \
" .cfi_endproc;" \
".size __x86_indirect_thunk_" #REG ", .-__x86_indirect_thunk_" #REG)
#define NV_NOSPEC_THUNK(REG) \
__asm__( \
".weak __x86_indirect_thunk_" #REG ";" \
".type __x86_indirect_thunk_" #REG ", @function;" \
"__x86_indirect_thunk_" #REG ":" \
" .cfi_startproc;" \
" call .Lnv_no_fence_" #REG ";" \
".Lnv_fence_" #REG ":" \
" pause;" \
" lfence;" \
" jmp .Lnv_fence_" #REG ";" \
".Lnv_no_fence_" #REG ":" \
" mov %" #REG ", (%rsp);" \
" ret;" \
" .cfi_endproc;" \
".size __x86_indirect_thunk_" #REG ", .-__x86_indirect_thunk_" #REG)
__asm__(".pushsection .text");
NV_RETPOLINE_THUNK(rax);
NV_RETPOLINE_THUNK(rbx);
NV_RETPOLINE_THUNK(rcx);
NV_RETPOLINE_THUNK(rdx);
NV_RETPOLINE_THUNK(rsi);
NV_RETPOLINE_THUNK(rdi);
NV_RETPOLINE_THUNK(rbp);
NV_RETPOLINE_THUNK(r8);
NV_RETPOLINE_THUNK(r9);
NV_RETPOLINE_THUNK(r10);
NV_RETPOLINE_THUNK(r11);
NV_RETPOLINE_THUNK(r12);
NV_RETPOLINE_THUNK(r13);
NV_RETPOLINE_THUNK(r14);
NV_RETPOLINE_THUNK(r15);
__asm__(".popsection");
#endif
#endif /* _NV_RETPOLINE_H_ */

View File

@@ -0,0 +1,251 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NV_TIME_H__
#define __NV_TIME_H__
#include "conftest.h"
#include <linux/sched.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/ktime.h>
#include <nvstatus.h>
#define NV_MAX_ISR_DELAY_US 20000
#define NV_MAX_ISR_DELAY_MS (NV_MAX_ISR_DELAY_US / 1000)
#define NV_NSECS_TO_JIFFIES(nsec) ((nsec) * HZ / 1000000000)
#if !defined(NV_TIMESPEC64_PRESENT)
struct timespec64 {
__s64 tv_sec;
long tv_nsec;
};
#endif
#if !defined(NV_KTIME_GET_RAW_TS64_PRESENT)
static inline void ktime_get_raw_ts64(struct timespec64 *ts64)
{
struct timespec ts;
getrawmonotonic(&ts);
ts64->tv_sec = ts.tv_sec;
ts64->tv_nsec = ts.tv_nsec;
}
#endif
#if !defined(NV_KTIME_GET_REAL_TS64_PRESENT)
static inline void ktime_get_real_ts64(struct timespec64 *ts64)
{
struct timeval tv;
do_gettimeofday(&tv);
ts64->tv_sec = tv.tv_sec;
ts64->tv_nsec = tv.tv_usec * (NvU64) NSEC_PER_USEC;
}
#endif
static inline NvBool nv_timer_less_than
(
const struct timespec64 *a,
const struct timespec64 *b
)
{
return (a->tv_sec == b->tv_sec) ? (a->tv_nsec < b->tv_nsec)
: (a->tv_sec < b->tv_sec);
}
#if !defined(NV_TIMESPEC64_PRESENT)
static inline struct timespec64 timespec64_add
(
const struct timespec64 a,
const struct timespec64 b
)
{
struct timespec64 result;
result.tv_sec = a.tv_sec + b.tv_sec;
result.tv_nsec = a.tv_nsec + b.tv_nsec;
while (result.tv_nsec >= NSEC_PER_SEC)
{
++result.tv_sec;
result.tv_nsec -= NSEC_PER_SEC;
}
return result;
}
static inline struct timespec64 timespec64_sub
(
const struct timespec64 a,
const struct timespec64 b
)
{
struct timespec64 result;
result.tv_sec = a.tv_sec - b.tv_sec;
result.tv_nsec = a.tv_nsec - b.tv_nsec;
while (result.tv_nsec < 0)
{
--(result.tv_sec);
result.tv_nsec += NSEC_PER_SEC;
}
return result;
}
static inline s64 timespec64_to_ns(struct timespec64 *ts)
{
return ((s64) ts->tv_sec * NSEC_PER_SEC) + ts->tv_nsec;
}
#endif
static inline NvU64 nv_ktime_get_raw_ns(void)
{
struct timespec64 ts;
ktime_get_raw_ts64(&ts);
return (NvU64)timespec64_to_ns(&ts);
}
// #define NV_CHECK_DELAY_ACCURACY 1
/*
* It is generally a bad idea to use udelay() to wait for more than
* a few milliseconds. Since the caller is most likely not aware of
* this, we use mdelay() for any full millisecond to be safe.
*/
static inline NV_STATUS nv_sleep_us(unsigned int us)
{
unsigned long mdelay_safe_msec;
unsigned long usec;
#ifdef NV_CHECK_DELAY_ACCURACY
struct timespec64 tm1, tm2, tm_diff;
ktime_get_raw_ts64(&tm1);
#endif
if (in_irq() && (us > NV_MAX_ISR_DELAY_US))
return NV_ERR_GENERIC;
mdelay_safe_msec = us / 1000;
if (mdelay_safe_msec)
mdelay(mdelay_safe_msec);
usec = us % 1000;
if (usec)
udelay(usec);
#ifdef NV_CHECK_DELAY_ACCURACY
ktime_get_raw_ts64(&tm2);
tm_diff = timespec64_sub(tm2, tm1);
pr_info("NVRM: delay of %d usec results in actual delay of 0x%llu nsec\n",
us, timespec64_to_ns(&tm_diff));
#endif
return NV_OK;
}
/*
* Sleep for specified milliseconds. Yields the CPU to scheduler.
*
* On Linux, a jiffy represents the time passed between two timer
* interrupts. The number of jiffies per second (HZ) varies across the
* supported platforms. On i386, where HZ is 100, a timer interrupt is
* generated every 10ms. NV_NSECS_TO_JIFFIES should be accurate independent of
* the actual value of HZ; any partial jiffies will be 'floor'ed, and the
* remainder will be accounted for with mdelay().
*/
static inline NV_STATUS nv_sleep_ms(unsigned int ms)
{
NvU64 ns;
unsigned long jiffies;
unsigned long mdelay_safe_msec;
struct timespec64 tm_end, tm_aux;
#ifdef NV_CHECK_DELAY_ACCURACY
struct timespec64 tm_start;
#endif
ktime_get_raw_ts64(&tm_aux);
#ifdef NV_CHECK_DELAY_ACCURACY
tm_start = tm_aux;
#endif
if (in_irq() && (ms > NV_MAX_ISR_DELAY_MS))
{
return NV_ERR_GENERIC;
}
if (irqs_disabled() || in_interrupt() || in_atomic())
{
mdelay(ms);
return NV_OK;
}
ns = ms * (NvU64) NSEC_PER_MSEC;
tm_end.tv_nsec = ns;
tm_end.tv_sec = 0;
tm_end = timespec64_add(tm_aux, tm_end);
/* do we have a full jiffy to wait? */
jiffies = NV_NSECS_TO_JIFFIES(ns);
if (jiffies)
{
//
// If we have at least one full jiffy to wait, give up
// the CPU; since we may be rescheduled before the
// requested timeout has expired, loop until less than
// a jiffy of the desired delay remains.
//
set_current_state(TASK_INTERRUPTIBLE);
do
{
schedule_timeout(jiffies);
ktime_get_raw_ts64(&tm_aux);
if (nv_timer_less_than(&tm_aux, &tm_end))
{
tm_aux = timespec64_sub(tm_end, tm_aux);
ns = (NvU64) timespec64_to_ns(&tm_aux);
}
else
ns = 0;
} while ((jiffies = NV_NSECS_TO_JIFFIES(ns)) != 0);
}
if (ns > (NvU64) NSEC_PER_MSEC)
{
mdelay_safe_msec = ns / (NvU64) NSEC_PER_MSEC;
mdelay(mdelay_safe_msec);
ns %= (NvU64) NSEC_PER_MSEC;
}
if (ns)
{
ndelay(ns);
}
#ifdef NV_CHECK_DELAY_ACCURACY
ktime_get_raw_ts64(&tm_aux);
tm_aux = timespec64_sub(tm_aux, tm_start);
pr_info("NVRM: delay of %d msec results in actual delay of %lld.%09ld sec\n",
ms, tm_aux.tv_sec, tm_aux.tv_nsec);
#endif
return NV_OK;
}
#endif // __NV_TIME_H__
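To make the jiffy arithmetic concrete: with HZ=100, nv_sleep_ms(25) converts to 25,000,000 ns, NV_NSECS_TO_JIFFIES yields 2 full jiffies (20 ms) spent in schedule_timeout(), and the remaining ~5 ms is finished with mdelay()/ndelay(). A sketch of a caller polling a hypothetical ready flag (example_wait_ready and the timeout are illustrative):

static NV_STATUS example_wait_ready(volatile NvBool *ready)
{
    unsigned int waited_ms = 0;

    while (!*ready && (waited_ms < 1000))
    {
        /* Yields the CPU whenever a full jiffy or more remains. */
        if (nv_sleep_ms(10) != NV_OK)
            return NV_ERR_GENERIC;
        waited_ms += 10;
    }
    return (*ready) ? NV_OK : NV_ERR_TIMEOUT;
}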

View File

@@ -0,0 +1,66 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2017 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NV_TIMER_H__
#define __NV_TIMER_H__
#include <linux/timer.h>
#include <linux/kernel.h> // For container_of
#include "conftest.h"
struct nv_timer
{
struct timer_list kernel_timer;
void (*nv_timer_callback)(struct nv_timer *nv_timer);
};
static inline void nv_timer_callback_typed_data(struct timer_list *timer)
{
struct nv_timer *nv_timer =
container_of(timer, struct nv_timer, kernel_timer);
nv_timer->nv_timer_callback(nv_timer);
}
static inline void nv_timer_callback_anon_data(unsigned long arg)
{
struct nv_timer *nv_timer = (struct nv_timer *)arg;
nv_timer->nv_timer_callback(nv_timer);
}
static inline void nv_timer_setup(struct nv_timer *nv_timer,
void (*callback)(struct nv_timer *nv_timer))
{
nv_timer->nv_timer_callback = callback;
#if defined(NV_TIMER_SETUP_PRESENT)
timer_setup(&nv_timer->kernel_timer, nv_timer_callback_typed_data, 0);
#else
init_timer(&nv_timer->kernel_timer);
nv_timer->kernel_timer.function = nv_timer_callback_anon_data;
nv_timer->kernel_timer.data = (unsigned long)nv_timer;
#endif
}
#endif // __NV_TIMER_H__
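A short usage sketch: arm one of these timers to fire once, roughly a second from now (example_timer_fired and the expiry are illustrative):

static void example_timer_fired(struct nv_timer *nv_timer)
{
    /* Runs in timer (softirq) context on both kernel flavors. */
}

static void example_start_timer(struct nv_timer *t)
{
    nv_timer_setup(t, example_timer_fired);
    mod_timer(&t->kernel_timer, jiffies + HZ); /* ~1 second from now */
}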

1081
kernel-open/common/inc/nv.h Normal file

File diff suppressed because it is too large

View File

@@ -0,0 +1,44 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2015-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _NV_CPU_UUID_H_
#define _NV_CPU_UUID_H_
#define NV_UUID_LEN 16
typedef struct nv_uuid
{
NvU8 uuid[NV_UUID_LEN];
} NvUuid;
#define NV_UUID_HI(pUuid) (*((NvU64*)((pUuid)->uuid + (NV_UUID_LEN >> 1))))
#define NV_UUID_LO(pUuid) (*((NvU64*)((pUuid)->uuid + 0)))
typedef NvUuid NvSystemUuid;
typedef NvUuid NvProcessorUuid;
extern const NvProcessorUuid NV_PROCESSOR_UUID_CPU_DEFAULT;
#endif // _NV_CPU_UUID_H_
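Since the HI/LO accessors view the 16-byte UUID as two 64-bit halves, equality can be checked with two compares instead of a memcmp(); a sketch, assuming the underlying buffer is suitably aligned for 64-bit loads:

static NvBool example_uuid_equal(NvUuid *a, NvUuid *b)
{
    return (NV_UUID_LO(a) == NV_UUID_LO(b)) &&
           (NV_UUID_HI(a) == NV_UUID_HI(b));
}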

View File

@@ -0,0 +1,34 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef NV_FIRMWARE_TYPES_H
#define NV_FIRMWARE_TYPES_H
typedef enum {
NV_FIRMWARE_MODE_DISABLED = 0,
NV_FIRMWARE_MODE_ENABLED = 1,
NV_FIRMWARE_MODE_DEFAULT = 2,
NV_FIRMWARE_MODE_INVALID = 0xFF
} NvFirmwareMode;
#endif // NV_FIRMWARE_TYPES_H

View File

@@ -0,0 +1,227 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2018 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
/*
* NVIDIA GPZ vulnerability mitigation definitions.
*/
/*
* There are two copies of this file for legacy reasons:
*
* P4: <$NV_SOURCE/>drivers/common/inc/nv_speculation_barrier.h
* Git: <tegra/core/>include/nv_speculation_barrier.h
*
* Both files need to be kept in sync if any changes are required.
*/
#ifndef _NV_SPECULATION_BARRIER_H_
#define _NV_SPECULATION_BARRIER_H_
#define NV_SPECULATION_BARRIER_VERSION 2
/*
* GNU-C/MSC/clang - x86/x86_64 : x86_64, __i386, __i386__
* GNU-C - THUMB mode : __GNUC__, __thumb__
* GNU-C - ARM modes : __GNUC__, __arm__, __aarch64__
* armclang - THUMB mode : __ARMCC_VERSION, __thumb__
* armclang - ARM modes : __ARMCC_VERSION, __arm__, __aarch64__
* GHS - THUMB mode : __ghs__, __THUMB__
* GHS - ARM modes : __ghs__, __ARM__, __ARM64__
*/
#if defined(_M_IX86) || defined(__i386__) || defined(__i386) \
|| defined(__x86_64) || defined(AMD64) || defined(_M_AMD64)
/* All x86 */
#define NV_SPECULATION_BARRIER_x86
#elif defined(macintosh) || defined(__APPLE__) \
|| defined(__powerpc) || defined(__powerpc__) || defined(__powerpc64__) \
|| defined(__POWERPC__) || defined(__ppc) || defined(__ppc__) \
|| defined(__ppc64__) || defined(__PPC__) \
|| defined(__PPC64__) || defined(_ARCH_PPC) || defined(_ARCH_PPC64)
/* All PowerPC */
#define NV_SPECULATION_BARRIER_PPC
#elif (defined(__GNUC__) && defined(__thumb__)) \
|| (defined(__ARMCC_VERSION) && defined(__thumb__)) \
|| (defined(__ghs__) && defined(__THUMB__))
/* ARM-thumb mode(<=ARMv7)/T32 (ARMv8) */
#define NV_SPECULATION_BARRIER_ARM_COMMON
#define NV_SPEC_BARRIER_CSDB ".inst.w 0xf3af8014\n"
#elif (defined(__GNUC__) && defined(__arm__)) \
|| (defined(__ARMCC_VERSION) && defined(__arm__)) \
|| (defined(__ghs__) && defined(__ARM__))
/* aarch32(ARMv8) / arm(<=ARMv7) mode */
#define NV_SPECULATION_BARRIER_ARM_COMMON
#define NV_SPEC_BARRIER_CSDB ".inst 0xe320f014\n"
#elif (defined(__GNUC__) && defined(__aarch64__)) \
|| (defined(__ARMCC_VERSION) && defined(__aarch64__)) \
|| (defined(__ghs__) && defined(__ARM64__))
/* aarch64(ARMv8) mode */
#define NV_SPECULATION_BARRIER_ARM_COMMON
#define NV_SPEC_BARRIER_CSDB "HINT #20\n"
#elif (defined(_MSC_VER) && ( defined(_M_ARM64) || defined(_M_ARM)) )
/* Not currently implemented for MSVC/ARM64. See bug 3366890. */
# define nv_speculation_barrier()
# define speculation_barrier() nv_speculation_barrier()
#elif defined(NVCPU_NVRISCV64) && NVOS_IS_LIBOS
# define nv_speculation_barrier()
#else
#error "Unknown compiler/chip family"
#endif
/*
* nv_speculation_barrier -- General-purpose speculation barrier
*
* This approach provides full protection against variant-1 vulnerability.
* However, the recommended approach is detailed below (See:
* nv_array_index_no_speculate)
*
* Semantics:
* Any memory read that is sequenced after a nv_speculation_barrier(),
* and contained directly within the scope of nv_speculation_barrier() or
* directly within a nested scope, will not speculatively execute until all
* conditions for entering that scope have been architecturally resolved.
*
* Example:
* if (untrusted_index_from_user < bound) {
* ...
* nv_speculation_barrier();
* ...
* x = array1[untrusted_index_from_user];
* bit = x & 1;
* y = array2[0x100 * bit];
* }
*/
#if defined(NV_SPECULATION_BARRIER_x86)
// Delete after all references are changed to nv_speculation_barrier
#define speculation_barrier() nv_speculation_barrier()
static inline void nv_speculation_barrier(void)
{
#if defined(_MSC_VER) && !defined(__clang__)
_mm_lfence();
#endif
#if defined(__GNUC__) || defined(__clang__)
__asm__ __volatile__ ("lfence" : : : "memory");
#endif
}
#elif defined(NV_SPECULATION_BARRIER_PPC)
static inline void nv_speculation_barrier(void)
{
asm volatile("ori 31,31,0");
}
#elif defined(NV_SPECULATION_BARRIER_ARM_COMMON)
/* Note: Cortex-A9 GNU-assembler seems to complain about DSB SY */
#define nv_speculation_barrier() \
asm volatile \
( \
"DSB sy\n" \
"ISB\n" \
: : : "memory" \
)
#endif
/*
* nv_array_index_no_speculate -- Recommended variant-1 mitigation approach
*
* The array-index-no-speculate approach "de-speculates" an array index that
* has already been bounds-checked.
*
* This approach is preferred over nv_speculation_barrier due to the following
* reasons:
* - It is just as effective as the general-purpose speculation barrier.
* - It clearly identifies what array index is being de-speculated and is thus
* self-commenting, whereas the general-purpose speculation barrier requires
* an explanation of what array index is being de-speculated.
* - It performs substantially better than the general-purpose speculation
* barrier on ARM Cortex-A cores (the difference is expected to be tens of
* cycles per invocation). Within tight loops, this difference may become
* noticeable.
*
* Semantics:
* Provided count is non-zero and the caller has already validated or otherwise
* established that index < count, any speculative use of the return value will
* use a speculative value that is less than count.
*
* Example:
* if (untrusted_index_from_user < bound) {
* untrusted_index_from_user = nv_array_index_no_speculate(
* untrusted_index_from_user, bound);
* ...
* x = array1[untrusted_index_from_user];
* ...
* }
*
* The use of nv_array_index_no_speculate() in the above example ensures that
* subsequent uses of untrusted_index_from_user will not execute speculatively
* (they will wait for the bounds check to complete).
*/
static inline unsigned long nv_array_index_no_speculate(unsigned long index,
unsigned long count)
{
#if defined(NV_SPECULATION_BARRIER_x86) && (defined(__GNUC__) || defined(__clang__))
unsigned long mask;
__asm__ __volatile__
(
"CMP %2, %1 \n"
"SBB %0, %0 \n"
: "=r"(mask) : "r"(index), "r"(count) : "cc"
);
return (index & mask);
#elif defined(NV_SPECULATION_BARRIER_ARM_COMMON)
unsigned long mask;
asm volatile
(
"CMP %[ind], %[cnt] \n"
"SBC %[res], %[cnt], %[cnt] \n"
NV_SPEC_BARRIER_CSDB
: [res] "=r" (mask) : [ind] "r" (index), [cnt] "r" (count): "cc"
);
return (index & mask);
/* Fallback to generic speculation barrier for unsupported platforms */
#else
nv_speculation_barrier();
return index;
#endif
}
#endif //_NV_SPECULATION_BARRIER_H_

View File

@@ -0,0 +1,39 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _NV_STDARG_H_
#define _NV_STDARG_H_
#if defined(NV_KERNEL_INTERFACE_LAYER) && defined(NV_LINUX)
#include "conftest.h"
#if defined(NV_LINUX_STDARG_H_PRESENT)
#include <linux/stdarg.h>
#else
#include <stdarg.h>
#endif
#else
#include <stdarg.h>
#endif
#endif // _NV_STDARG_H_

File diff suppressed because it is too large

View File

@@ -0,0 +1,970 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2014-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
//
// This file provides common types for both UVM driver and RM's UVM interface.
//
#ifndef _NV_UVM_TYPES_H_
#define _NV_UVM_TYPES_H_
#include "nvtypes.h"
#include "nvstatus.h"
#include "nvgputypes.h"
#include "nvCpuUuid.h"
//
// The default page size is left "0" because in RM the BIG page size is the
// default, and there are multiple BIG page sizes in RM. These defines are
// used as flags, so "0" should be OK when the user is not sure which page
// size allocation it wants.
//
#define UVM_PAGE_SIZE_DEFAULT 0x0
#define UVM_PAGE_SIZE_4K 0x1000
#define UVM_PAGE_SIZE_64K 0x10000
#define UVM_PAGE_SIZE_128K 0x20000
#define UVM_PAGE_SIZE_2M 0x200000
#define UVM_PAGE_SIZE_512M 0x20000000
//
// When modifying flags, make sure they are compatible with the mirrored
// PMA_* flags in phys_mem_allocator.h.
//
// Input flags
#define UVM_PMA_ALLOCATE_DONT_EVICT NVBIT(0)
#define UVM_PMA_ALLOCATE_PINNED NVBIT(1)
#define UVM_PMA_ALLOCATE_SPECIFY_MINIMUM_SPEED NVBIT(2)
#define UVM_PMA_ALLOCATE_SPECIFY_ADDRESS_RANGE NVBIT(3)
#define UVM_PMA_ALLOCATE_SPECIFY_REGION_ID NVBIT(4)
#define UVM_PMA_ALLOCATE_PREFER_SLOWEST NVBIT(5)
#define UVM_PMA_ALLOCATE_CONTIGUOUS NVBIT(6)
#define UVM_PMA_ALLOCATE_PERSISTENT NVBIT(7)
#define UVM_PMA_ALLOCATE_PROTECTED_REGION NVBIT(8)
#define UVM_PMA_ALLOCATE_FORCE_ALIGNMENT NVBIT(9)
#define UVM_PMA_ALLOCATE_NO_ZERO NVBIT(10)
#define UVM_PMA_ALLOCATE_TURN_BLACKLIST_OFF NVBIT(11)
#define UVM_PMA_ALLOCATE_ALLOW_PARTIAL NVBIT(12)
// Output flags
#define UVM_PMA_ALLOCATE_RESULT_IS_ZERO NVBIT(0)
// Input flags to pmaFree
#define UVM_PMA_FREE_IS_ZERO NVBIT(0)
//
// Indicate that the PMA operation is being done from one of the PMA eviction
// callbacks.
//
// Notably this flag is currently used only by the UVM/RM interface and not
// mirrored in PMA.
//
#define UVM_PMA_CALLED_FROM_PMA_EVICTION 16384
#define UVM_UUID_LEN 16
#define UVM_SW_OBJ_SUBCHANNEL 5
typedef unsigned long long UvmGpuPointer;
//
// The following typedefs serve to explain the resources they point to.
// The actual resources remain RM internal and not exposed.
//
typedef struct uvmGpuSession_tag *uvmGpuSessionHandle; // gpuSessionHandle
typedef struct uvmGpuDevice_tag *uvmGpuDeviceHandle; // gpuDeviceHandle
typedef struct uvmGpuAddressSpace_tag *uvmGpuAddressSpaceHandle; // gpuAddressSpaceHandle
typedef struct uvmGpuChannel_tag *uvmGpuChannelHandle; // gpuChannelHandle
typedef struct uvmGpuCopyEngine_tag *uvmGpuCopyEngineHandle; // gpuObjectHandle
typedef struct UvmGpuMemoryInfo_tag
{
// Out: Memory layout.
NvU32 kind;
// Out: Set to TRUE, if the allocation is in sysmem.
NvBool sysmem;
    // Out: Set to TRUE, if the allocation is constructed under
    // a Device or Subdevice.
// All permutations of sysmem and deviceDescendant are valid.
// !sysmem && !deviceDescendant implies a fabric allocation.
NvBool deviceDescendant;
// Out: Page size associated with the phys alloc.
NvU32 pageSize;
// Out: Set to TRUE, if the allocation is contiguous.
NvBool contig;
// Out: Starting Addr if the allocation is contiguous.
// This is only valid if contig is NV_TRUE.
NvU64 physAddr;
// Out: Total size of the allocation.
NvU64 size;
// Out: Uuid of the GPU to which the allocation belongs.
// This is only valid if deviceDescendant is NV_TRUE.
// Note: If the allocation is owned by a device in
// an SLI group and the allocation is broadcast
// across the SLI group, this UUID will be any one
// of the subdevices in the SLI group.
NvProcessorUuid uuid;
} UvmGpuMemoryInfo;
// Some resources must share the same virtual mappings across channels. A mapped
// resource must be shared by a channel iff:
//
// 1) The channel belongs to a TSG (UvmGpuChannelInstanceInfo::bTsgChannel is
// NV_TRUE).
//
// 2) The channel is in the same TSG as all other channels sharing that mapping
// (UvmGpuChannelInstanceInfo::tsgId matches among channels).
//
// 3) The channel is in the same GPU address space as the other channels
// sharing that mapping.
//
// 4) The resource handle(s) match those of the shared mapping
// (UvmGpuChannelResourceInfo::resourceDescriptor and
// UvmGpuChannelResourceInfo::resourceId).
typedef struct UvmGpuChannelResourceInfo_tag
{
// Out: Ptr to the RM memDesc of the channel resource.
NvP64 resourceDescriptor;
// Out: RM ID of the channel resource.
NvU32 resourceId;
// Out: Alignment needed for the resource allocation.
NvU64 alignment;
// Out: Info about the resource allocation.
UvmGpuMemoryInfo resourceInfo;
} UvmGpuChannelResourceInfo;
typedef struct UvmGpuPagingChannelInfo_tag
{
    // Pointer to a shadow buffer mirroring the contents of the error notifier
// for the paging channel
NvNotification *shadowErrorNotifier;
} UvmGpuPagingChannelInfo;
typedef enum
{
UVM_GPU_CHANNEL_ENGINE_TYPE_GR = 1,
UVM_GPU_CHANNEL_ENGINE_TYPE_CE = 2,
UVM_GPU_CHANNEL_ENGINE_TYPE_SEC2 = 3,
} UVM_GPU_CHANNEL_ENGINE_TYPE;
#define UVM_GPU_CHANNEL_MAX_RESOURCES 13
typedef struct UvmGpuChannelInstanceInfo_tag
{
// Out: Starting address of the channel instance.
NvU64 base;
// Out: Set to NV_TRUE, if the instance is in sysmem.
// Set to NV_FALSE, if the instance is in vidmem.
NvBool sysmem;
// Out: Hardware runlist ID.
NvU32 runlistId;
// Out: Hardware channel ID.
NvU32 chId;
// Out: NV_TRUE if the channel belongs to a subcontext or NV_FALSE if it
// belongs to a regular context.
NvBool bInSubctx;
// Out: ID of the subcontext to which the channel belongs.
NvU32 subctxId;
// Out: Whether the channel belongs to a TSG or not
NvBool bTsgChannel;
// Out: ID of the TSG to which the channel belongs
NvU32 tsgId;
// Out: Maximum number of subcontexts in the TSG to which the channel belongs
NvU32 tsgMaxSubctxCount;
// Out: Info of channel resources associated with the channel.
UvmGpuChannelResourceInfo resourceInfo[UVM_GPU_CHANNEL_MAX_RESOURCES];
// Out: Number of valid entries in resourceInfo array.
NvU32 resourceCount;
// Out: Type of the engine the channel is bound to
NvU32 channelEngineType;
// Out: Channel handle required to ring the doorbell
NvU32 workSubmissionToken;
// Out: Address of the doorbell
volatile NvU32 *workSubmissionOffset;
// Out: Channel handle to be used in the CLEAR_FAULTED method
NvU32 clearFaultedToken;
// Out: Address of the NV_CHRAM_CHANNEL register required to clear the
// ENG_FAULTED/PBDMA_FAULTED bits after servicing non-replayable faults on
// Ampere+ GPUs
volatile NvU32 *pChramChannelRegister;
// Out: SMC engine id to which the GR channel is bound, or zero if the GPU
// does not support SMC or it is a CE channel
NvU32 smcEngineId;
// Out: Start of the VEID range assigned to the SMC engine the GR channel
// is bound to, or zero if the GPU does not support SMC or it is a CE
// channel
NvU32 smcEngineVeIdOffset;
} UvmGpuChannelInstanceInfo;
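// Illustrative sketch (not part of this interface): one way a client could
// apply sharing rules 1-4 documented above UvmGpuChannelResourceInfo. The
// helper name and the use of opaque pointers to compare GPU address spaces
// are assumptions made for this example.
static NvBool uvmExampleMustShareMapping(const UvmGpuChannelInstanceInfo *chanA,
                                         const UvmGpuChannelResourceInfo *resA,
                                         const void *vaSpaceA,
                                         const UvmGpuChannelInstanceInfo *chanB,
                                         const UvmGpuChannelResourceInfo *resB,
                                         const void *vaSpaceB)
{
    // Rule 1: the channels must belong to a TSG.
    if (!chanA->bTsgChannel || !chanB->bTsgChannel)
        return NV_FALSE;
    // Rule 2: the channels must be in the same TSG.
    if (chanA->tsgId != chanB->tsgId)
        return NV_FALSE;
    // Rule 3: the channels must share a GPU address space.
    if (vaSpaceA != vaSpaceB)
        return NV_FALSE;
    // Rule 4: the resource handles must match.
    return (resA->resourceDescriptor == resB->resourceDescriptor) &&
           (resA->resourceId == resB->resourceId);
}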
typedef struct UvmGpuChannelResourceBindParams_tag
{
// In: RM ID of the channel resource.
NvU32 resourceId;
// In: Starting VA at which the channel resource is mapped.
NvU64 resourceVa;
} UvmGpuChannelResourceBindParams;
typedef struct UvmGpuChannelInfo_tag
{
volatile unsigned *gpGet;
volatile unsigned *gpPut;
UvmGpuPointer *gpFifoEntries;
unsigned numGpFifoEntries;
unsigned channelClassNum;
// The errorNotifier is filled out when the channel hits an RC error.
NvNotification *errorNotifier;
NvU32 hwRunlistId;
NvU32 hwChannelId;
volatile unsigned *dummyBar1Mapping;
// These values are filled by nvUvmInterfaceCopyEngineAlloc. The work
// submission token requires the channel to be bound to a runlist and that
// happens after CE allocation.
volatile NvU32 *workSubmissionOffset;
// To be deprecated. See pWorkSubmissionToken below.
NvU32 workSubmissionToken;
//
// This is the memory location where the most recently updated work
// submission token for this channel will be written to. After submitting
// new work and updating GP_PUT with the appropriate fence, the token must
// be read from this location before writing it to the workSubmissionOffset
// to kick off the new work.
//
volatile NvU32 *pWorkSubmissionToken;
} UvmGpuChannelInfo;
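// Illustrative sketch (not part of this interface): the work submission
// sequence described above. __sync_synchronize() stands in for whatever
// store fence the client's platform requires; the function name is an
// assumption made for this example.
static void uvmExampleKickoff(UvmGpuChannelInfo *channel, unsigned newGpPut)
{
    // Publish the new GPFIFO entries by updating GP_PUT.
    *channel->gpPut = newGpPut;
    // Fence so the GP_PUT update is visible before the doorbell is rung.
    __sync_synchronize();
    // Read the most recently updated token for this channel, then write it
    // to the doorbell to kick off the new work.
    NvU32 token = *channel->pWorkSubmissionToken;
    *channel->workSubmissionOffset = token;
}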
typedef enum
{
// This value must be passed by Pascal and pre-Pascal GPUs for those
// allocations for which a specific location cannot be enforced.
UVM_BUFFER_LOCATION_DEFAULT = 0,
UVM_BUFFER_LOCATION_SYS = 1,
UVM_BUFFER_LOCATION_VID = 2,
} UVM_BUFFER_LOCATION;
typedef struct UvmGpuChannelAllocParams_tag
{
NvU32 numGpFifoEntries;
// The next two fields store UVM_BUFFER_LOCATION values
NvU32 gpFifoLoc;
NvU32 gpPutLoc;
// Index of the engine the channel will be bound to
// ignored if engineType is anything other than UVM_GPU_CHANNEL_ENGINE_TYPE_CE
NvU32 engineIndex;
// interpreted as UVM_GPU_CHANNEL_ENGINE_TYPE
NvU32 engineType;
} UvmGpuChannelAllocParams;
typedef struct UvmGpuPagingChannelAllocParams_tag
{
// Index of the LCE engine the channel will be bound to, a zero-based offset
// from NV2080_ENGINE_TYPE_COPY0.
NvU32 engineIndex;
} UvmGpuPagingChannelAllocParams;
// The max number of Copy Engines supported by a GPU.
// The gpu ops build has a static assert that this is the correct number.
#define UVM_COPY_ENGINE_COUNT_MAX 10
typedef struct
{
// True if the CE is supported at all
NvBool supported:1;
// True if the CE is synchronous with GR
NvBool grce:1;
// True if the CE shares physical CEs with any other CE
//
// The value returned by RM for this field may change when a GPU is
// registered with RM for the first time, so UVM needs to query it
// again each time a GPU is registered.
NvBool shared:1;
// True if the CE can give enhanced performance for SYSMEM reads over other CEs
NvBool sysmemRead:1;
// True if the CE can give enhanced performance for SYSMEM writes over other CEs
NvBool sysmemWrite:1;
// True if the CE can be used for SYSMEM transactions
NvBool sysmem:1;
// True if the CE can be used for P2P transactions using NVLINK
NvBool nvlinkP2p:1;
// True if the CE can be used for P2P transactions
NvBool p2p:1;
// Mask of physical CEs assigned to this LCE
//
// The value returned by RM for this field may change when a GPU is
// registered with RM for the first time, so UVM needs to query it
// again each time a GPU is registered.
NvU32 cePceMask;
} UvmGpuCopyEngineCaps;
typedef struct UvmGpuCopyEnginesCaps_tag
{
// Supported CEs may not be contiguous
UvmGpuCopyEngineCaps copyEngineCaps[UVM_COPY_ENGINE_COUNT_MAX];
} UvmGpuCopyEnginesCaps;
typedef enum
{
UVM_LINK_TYPE_NONE,
UVM_LINK_TYPE_PCIE,
UVM_LINK_TYPE_NVLINK_1,
UVM_LINK_TYPE_NVLINK_2,
UVM_LINK_TYPE_NVLINK_3,
} UVM_LINK_TYPE;
typedef struct UvmGpuCaps_tag
{
NvU32 sysmemLink; // UVM_LINK_TYPE
NvU32 sysmemLinkRateMBps; // See UvmGpuP2PCapsParams::totalLinkLineRateMBps
NvBool numaEnabled;
NvU32 numaNodeId;
// On ATS systems, GPUs connected to different CPU sockets can have peer
// traffic. They are called indirect peers. However, indirect peers are
// mapped using sysmem aperture. In order to disambiguate the location of a
// specific memory address, each GPU maps its memory to a different window
// in the System Physical Address (SPA) space. The following fields contain
// the base + size of that window for the GPU. A systemMemoryWindowSize
// different from 0 indicates that the window is valid.
//
// - If the window is valid, then we can map GPU memory to the CPU as
// cache-coherent by adding the GPU address to the window start.
// - If numaEnabled is NV_TRUE, then we can also convert the system
// addresses of allocated GPU memory to struct pages.
//
// TODO: Bug 1986868: fix window start computation for SIMICS
NvU64 systemMemoryWindowStart;
NvU64 systemMemoryWindowSize;
// This tells whether the GPU is connected to NVSwitch. On systems with NVSwitch
// all GPUs are connected to it. If connectedToSwitch is NV_TRUE,
// nvswitchMemoryWindowStart tells the base address for the GPU in the
// NVSwitch address space. It is used when creating PTEs of memory mappings
// to NVSwitch peers.
NvBool connectedToSwitch;
NvU64 nvswitchMemoryWindowStart;
} UvmGpuCaps;
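// Illustrative sketch (not part of this interface): translating a GPU
// physical address to a cache-coherent system physical address (SPA) using
// the window fields above. The helper name, the 'valid' out parameter, and
// the bounds check against the window size are assumptions made for this
// example.
static NvU64 uvmExampleGpuPaToSpa(const UvmGpuCaps *caps, NvU64 gpuPhysAddr,
                                  NvBool *valid)
{
    // A zero-sized window means no valid SPA window exists for this GPU.
    if (caps->systemMemoryWindowSize == 0 ||
        gpuPhysAddr >= caps->systemMemoryWindowSize) {
        *valid = NV_FALSE;
        return 0;
    }
    *valid = NV_TRUE;
    return caps->systemMemoryWindowStart + gpuPhysAddr;
}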
typedef struct UvmGpuAddressSpaceInfo_tag
{
NvU32 bigPageSize;
NvBool atsEnabled;
// Mapped registers that contain the current GPU time
volatile NvU32 *time0Offset;
volatile NvU32 *time1Offset;
// Maximum number of subcontexts supported under this GPU address space
NvU32 maxSubctxCount;
NvBool smcEnabled;
NvU32 smcSwizzId;
NvU32 smcGpcCount;
} UvmGpuAddressSpaceInfo;
typedef struct UvmGpuAllocInfo_tag
{
NvU64 rangeBegin; // Allocation will be made between
NvU64 rangeEnd; // rangeBegin & rangeEnd both included
NvU64 gpuPhysOffset; // Returns gpuPhysOffset if contiguous requested
NvU32 pageSize; // Default is the RM big page size (64K or 128K); otherwise use 4K or 2M
NvU64 alignment; // Alignment of allocation
NvBool bContiguousPhysAlloc; // Flag to request contiguous physical allocation
NvBool bMemGrowsDown; // Causes RM to reserve physical heap from top of FB
NvBool bPersistentVidmem; // Causes RM to allocate persistent video memory
NvHandle hPhysHandle; // Handle for phys allocation either provided or retrieved
} UvmGpuAllocInfo;
typedef enum
{
UVM_VIRT_MODE_NONE = 0, // Baremetal or passthrough virtualization
UVM_VIRT_MODE_LEGACY = 1, // Virtualization without SRIOV support
UVM_VIRT_MODE_SRIOV_HEAVY = 2, // Virtualization with SRIOV Heavy configured
UVM_VIRT_MODE_SRIOV_STANDARD = 3, // Virtualization with SRIOV Standard configured
UVM_VIRT_MODE_COUNT = 4,
} UVM_VIRT_MODE;
// !!! The following enums (with UvmRm prefix) are defined and documented in
// mm/uvm/interface/uvm_types.h and must be mirrored. Please refer to that file
// for more details.
// UVM GPU mapping types
typedef enum
{
UvmRmGpuMappingTypeDefault = 0,
UvmRmGpuMappingTypeReadWriteAtomic = 1,
UvmRmGpuMappingTypeReadWrite = 2,
UvmRmGpuMappingTypeReadOnly = 3,
UvmRmGpuMappingTypeCount = 4
} UvmRmGpuMappingType;
// UVM GPU caching types
typedef enum
{
UvmRmGpuCachingTypeDefault = 0,
UvmRmGpuCachingTypeForceUncached = 1,
UvmRmGpuCachingTypeForceCached = 2,
UvmRmGpuCachingTypeCount = 3
} UvmRmGpuCachingType;
// UVM GPU format types
typedef enum {
UvmRmGpuFormatTypeDefault = 0,
UvmRmGpuFormatTypeBlockLinear = 1,
UvmRmGpuFormatTypeCount = 2
} UvmRmGpuFormatType;
// UVM GPU Element bits types
typedef enum {
UvmRmGpuFormatElementBitsDefault = 0,
UvmRmGpuFormatElementBits8 = 1,
UvmRmGpuFormatElementBits16 = 2,
// CUDA does not support 24-bit width
UvmRmGpuFormatElementBits32 = 4,
UvmRmGpuFormatElementBits64 = 5,
UvmRmGpuFormatElementBits128 = 6,
UvmRmGpuFormatElementBitsCount = 7
} UvmRmGpuFormatElementBits;
// UVM GPU Compression types
typedef enum {
UvmRmGpuCompressionTypeDefault = 0,
UvmRmGpuCompressionTypeEnabledNoPlc = 1,
UvmRmGpuCompressionTypeCount = 2
} UvmRmGpuCompressionType;
typedef struct UvmGpuExternalMappingInfo_tag
{
// In: GPU caching ability.
UvmRmGpuCachingType cachingType;
// In: Virtual permissions.
UvmRmGpuMappingType mappingType;
// In: RM virtual mapping memory format
UvmRmGpuFormatType formatType;
// In: RM virtual mapping element bits
UvmRmGpuFormatElementBits elementBits;
// In: RM virtual compression type
UvmRmGpuCompressionType compressionType;
// In: Size of the buffer to store PTEs (in bytes).
NvU64 pteBufferSize;
// In: Pointer to a buffer to store PTEs.
// Out: The interface will fill the buffer with PTEs
NvU64 *pteBuffer;
// Out: Number of PTEs filled in to the buffer.
NvU64 numWrittenPtes;
// Out: Number of PTEs remaining to be filled
// if the buffer is not sufficient to accommodate
// requested PTEs.
NvU64 numRemainingPtes;
// Out: PTE size (in bytes)
NvU32 pteSize;
} UvmGpuExternalMappingInfo;
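// Illustrative sketch (not part of this interface): driving the pteBuffer
// protocol above. queryPtes is a hypothetical callback standing in for
// whichever RM interface fills in UvmGpuExternalMappingInfo; the helper
// name is an assumption made for this example.
static NV_STATUS uvmExampleFillPtes(UvmGpuExternalMappingInfo *info,
                                    NvU64 *buffer, NvU64 bufferBytes,
                                    NV_STATUS (*queryPtes)(UvmGpuExternalMappingInfo *))
{
    NV_STATUS status;

    info->pteBuffer = buffer;
    info->pteBufferSize = bufferBytes;

    status = queryPtes(info);
    if (status != NV_OK)
        return status;

    // On success, numWrittenPtes entries of pteSize bytes each were written.
    // A nonzero numRemainingPtes means the buffer was too small: grow it by
    // numRemainingPtes * pteSize bytes and query the remainder.
    return NV_OK;
}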
typedef struct UvmGpuP2PCapsParams_tag
{
// Out: peerIds[i] contains gpu[i]'s peer id of gpu[1 - i]. Only defined if
// the GPUs are direct peers.
NvU32 peerIds[2];
// Out: UVM_LINK_TYPE
NvU32 p2pLink;
// Out: optimalNvlinkWriteCEs[i] contains gpu[i]'s optimal CE for writing to
// gpu[1 - i]. The CE indexes are valid only if the GPUs are NVLink peers.
//
// The value returned by RM for this field may change when a GPU is
// registered with RM for the first time, so UVM needs to query it again
// each time a GPU is registered.
NvU32 optimalNvlinkWriteCEs[2];
// Out: Maximum unidirectional bandwidth between the peers in megabytes per
// second, not taking into account protocol overhead. The reported
// bandwidth for indirect peers is zero.
NvU32 totalLinkLineRateMBps;
// Out: True if the peers have an indirect link to communicate. On P9
// systems, this is true if peers are connected to different NPUs that
// forward the requests between them.
NvU32 indirectAccess : 1;
} UvmGpuP2PCapsParams;
// Platform-wide information
typedef struct UvmPlatformInfo_tag
{
// Out: ATS (Address Translation Services) is supported
NvBool atsSupported;
} UvmPlatformInfo;
typedef struct UvmGpuClientInfo_tag
{
NvHandle hClient;
NvHandle hSmcPartRef;
} UvmGpuClientInfo;
#define UVM_GPU_NAME_LENGTH 0x40
typedef struct UvmGpuInfo_tag
{
// Printable gpu name
char name[UVM_GPU_NAME_LENGTH];
// Uuid of this gpu
NvProcessorUuid uuid;
// Gpu architecture; NV2080_CTRL_MC_ARCH_INFO_ARCHITECTURE_*
NvU32 gpuArch;
// Gpu implementation; NV2080_CTRL_MC_ARCH_INFO_IMPLEMENTATION_*
NvU32 gpuImplementation;
// Host (gpfifo) class; *_CHANNEL_GPFIFO_*, e.g. KEPLER_CHANNEL_GPFIFO_A
NvU32 hostClass;
// Copy engine (dma) class; *_DMA_COPY_*, e.g. KEPLER_DMA_COPY_A
NvU32 ceClass;
// Compute class; *_COMPUTE_*, e.g. KEPLER_COMPUTE_A
NvU32 computeClass;
// Set if GPU supports TCC Mode & is in TCC mode.
NvBool gpuInTcc;
// Number of subdevices in SLI group.
NvU32 subdeviceCount;
// Virtualization mode of this gpu.
NvU32 virtMode; // UVM_VIRT_MODE
// NV_TRUE if this is a simulated/emulated GPU. NV_FALSE, otherwise.
NvBool isSimulated;
// Number of GPCs
// If SMC is enabled, this is the currently configured number of GPCs for
// the given partition (also see the smcSwizzId field below).
NvU32 gpcCount;
// Maximum number of GPCs; NV_SCAL_LITTER_NUM_GPCS
// This number is independent of the partition configuration, and can be
// used to conservatively size GPU-global constructs.
NvU32 maxGpcCount;
// Number of TPCs
NvU32 tpcCount;
// Maximum number of TPCs per GPC
NvU32 maxTpcPerGpcCount;
// NV_TRUE if SMC is enabled on this GPU.
NvBool smcEnabled;
// SMC partition ID (unique per GPU); note: valid when first looked up in
// nvUvmInterfaceGetGpuInfo(), but not guaranteed to remain valid.
// nvUvmInterfaceDeviceCreate() re-verifies the swizzId and fails if it is
// no longer valid.
NvU32 smcSwizzId;
UvmGpuClientInfo smcUserClientInfo;
} UvmGpuInfo;
typedef struct UvmGpuFbInfo_tag
{
// Max physical address that can be allocated by UVM. This excludes
// internal RM regions, which are not registered with PMA.
NvU64 maxAllocatableAddress;
NvU32 heapSize; // RAM in KB available for user allocations
NvU32 reservedHeapSize; // RAM in KB reserved for internal RM allocation
NvBool bZeroFb; // Zero FB mode enabled.
} UvmGpuFbInfo;
typedef struct UvmGpuEccInfo_tag
{
unsigned eccMask;
unsigned eccOffset;
void *eccReadLocation;
NvBool *eccErrorNotifier;
NvBool bEccEnabled;
} UvmGpuEccInfo;
typedef struct UvmPmaAllocationOptions_tag
{
NvU32 flags;
NvU32 minimumSpeed; // valid if flags & UVM_PMA_ALLOCATE_SPECIFY_MININUM_SPEED
NvU64 physBegin, physEnd; // valid if flags & UVM_PMA_ALLOCATE_SPECIFY_ADDRESS_RANGE
NvU32 regionId; // valid if flags & UVM_PMA_ALLOCATE_SPECIFY_REGION_ID
NvU64 alignment; // valid if flags & UVM_PMA_ALLOCATE_FORCE_ALIGNMENT
NvLength numPagesAllocated; // valid if flags & UVM_PMA_ALLOCATE_ALLOW_PARTIAL
NvU32 resultFlags; // valid if the allocation function returns NV_OK
} UvmPmaAllocationOptions;
//
// Mirrored in PMA (PMA_STATS)
//
typedef struct UvmPmaStatistics_tag
{
volatile NvU64 numPages2m; // PMA-wide 2MB pages count across all regions
volatile NvU64 numFreePages64k; // PMA-wide free 64KB page count across all regions
volatile NvU64 numFreePages2m; // PMA-wide free 2MB pages count across all regions
} UvmPmaStatistics;
/*******************************************************************************
uvmEventSuspend
This function will be called by the GPU driver to signal to UVM that the
system is about to enter a sleep state. When it is called, the
following assumptions/guarantees are valid/made:
* User channels have been preempted and disabled
* UVM channels are still running normally and will continue to do
so until after this function returns control
* User threads are still running, but can no longer issue
system calls to the GPU driver
* Until exit from this function, UVM is allowed to make full use of
the GPUs under its control, as well as of the GPU driver
Upon return from this function, UVM may not access GPUs under its control
until the GPU driver calls uvmEventResume(). It may still receive
calls to uvmEventIsrTopHalf() during this time, which it should return
NV_ERR_NO_INTR_PENDING from. It will not receive any other calls.
*/
typedef NV_STATUS (*uvmEventSuspend_t) (void);
/*******************************************************************************
uvmEventResume
This function will be called by the GPU driver to signal to UVM that the
system has exited a previously entered sleep state. When it is called,
the following assumptions/guarantees are valid/made:
* UVM is again allowed to make full use of the GPUs under its
control, as well as of the GPU driver
* UVM channels are running normally
* User channels are still preempted and disabled
* User threads are again running, but still cannot issue system
calls to the GPU driver, nor submit new work
Upon return from this function, UVM is expected to be fully functional.
*/
typedef NV_STATUS (*uvmEventResume_t) (void);
/*******************************************************************************
uvmEventStartDevice
This function will be called by the GPU driver once it has finished its
initialization to tell the UVM driver that this GPU has come up.
*/
typedef NV_STATUS (*uvmEventStartDevice_t) (const NvProcessorUuid *pGpuUuidStruct);
/*******************************************************************************
uvmEventStopDevice
This function will be called by the GPU driver to let UVM know that a GPU
is going down.
*/
typedef NV_STATUS (*uvmEventStopDevice_t) (const NvProcessorUuid *pGpuUuidStruct);
#if defined (_WIN32)
/*******************************************************************************
uvmEventWddmResetDuringTimeout
This function will be called by KMD in a TDR servicing path to unmap channel
resources and to destroy channels. This is a Windows specific event.
*/
typedef NV_STATUS (*uvmEventWddmResetDuringTimeout_t) (const NvProcessorUuid *pGpuUuidStruct);
/*******************************************************************************
uvmEventWddmRestartAfterTimeout
This function will be called by KMD in a TDR servicing path to map channel
resources and to create channels. This is a Windows specific event.
*/
typedef NV_STATUS (*uvmEventWddmRestartAfterTimeout_t) (const NvProcessorUuid *pGpuUuidStruct);
/*******************************************************************************
uvmEventServiceInterrupt
This function gets called from RM's intr service routine when an interrupt
to service a page fault is triggered.
*/
typedef NV_STATUS (*uvmEventServiceInterrupt_t) (void *pDeviceObject,
NvU32 deviceId, NvU32 subdeviceId);
#endif
/*******************************************************************************
uvmEventIsrTopHalf_t
This function will be called by the GPU driver to let UVM know
that an interrupt has occurred.
Returns:
NV_OK if the UVM driver handled the interrupt
NV_ERR_NO_INTR_PENDING if the interrupt is not for the UVM driver
*/
#if defined (__linux__)
typedef NV_STATUS (*uvmEventIsrTopHalf_t) (const NvProcessorUuid *pGpuUuidStruct);
#else
typedef void (*uvmEventIsrTopHalf_t) (void);
#endif
struct UvmOpsUvmEvents
{
uvmEventSuspend_t suspend;
uvmEventResume_t resume;
uvmEventStartDevice_t startDevice;
uvmEventStopDevice_t stopDevice;
uvmEventIsrTopHalf_t isrTopHalf;
#if defined (_WIN32)
uvmEventWddmResetDuringTimeout_t wddmResetDuringTimeout;
uvmEventWddmRestartAfterTimeout_t wddmRestartAfterTimeout;
uvmEventServiceInterrupt_t serviceInterrupt;
#endif
};
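// Illustrative sketch (not part of this interface): a minimal Linux-side
// event table with stub callbacks honoring the contracts documented above,
// in particular the top-half ISR returning NV_ERR_NO_INTR_PENDING for
// interrupts that are not for the UVM driver. The stub names are
// assumptions made for this example.
#if defined(__linux__)
static NV_STATUS uvmExampleSuspend(void) { return NV_OK; }
static NV_STATUS uvmExampleResume(void) { return NV_OK; }
static NV_STATUS uvmExampleStartDevice(const NvProcessorUuid *uuid) { (void)uuid; return NV_OK; }
static NV_STATUS uvmExampleStopDevice(const NvProcessorUuid *uuid) { (void)uuid; return NV_OK; }
static NV_STATUS uvmExampleIsrTopHalf(const NvProcessorUuid *uuid)
{
    (void)uuid;
    return NV_ERR_NO_INTR_PENDING; // not our interrupt
}
static const struct UvmOpsUvmEvents uvmExampleEvents = {
    .suspend = uvmExampleSuspend,
    .resume = uvmExampleResume,
    .startDevice = uvmExampleStartDevice,
    .stopDevice = uvmExampleStopDevice,
    .isrTopHalf = uvmExampleIsrTopHalf,
};
#endif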
typedef struct UvmGpuFaultInfo_tag
{
struct
{
// Register mappings obtained from RM
volatile NvU32* pFaultBufferGet;
volatile NvU32* pFaultBufferPut;
// Note: this field is deprecated because, on future chips, buffer
// overflow is no longer reported via a separate register.
volatile NvU32* pFaultBufferInfo;
volatile NvU32* pPmcIntr;
volatile NvU32* pPmcIntrEnSet;
volatile NvU32* pPmcIntrEnClear;
volatile NvU32* pPrefetchCtrl;
NvU32 replayableFaultMask;
// fault buffer cpu mapping and size
void* bufferAddress;
NvU32 bufferSize;
} replayable;
struct
{
// Shadow buffer for non-replayable faults in CPU memory. Resman copies
// the non-replayable faults that UVM needs to handle into this buffer.
void* shadowBufferAddress;
// Execution context for the queue associated with the fault buffer
void* shadowBufferContext;
// Fault buffer size
NvU32 bufferSize;
// Preallocated stack for functions called from the UVM isr top half
void *isr_sp;
// Preallocated stack for functions called from the UVM isr bottom half
void *isr_bh_sp;
} nonReplayable;
NvHandle faultBufferHandle;
} UvmGpuFaultInfo;
typedef struct UvmGpuPagingChannel_tag
{
struct gpuDevice *device;
NvNotification *errorNotifier;
NvHandle channelHandle;
NvHandle errorNotifierHandle;
void *pushStreamSp;
} UvmGpuPagingChannel, *UvmGpuPagingChannelHandle;
typedef struct UvmGpuAccessCntrInfo_tag
{
// Register mappings obtained from RM
// pointer to the Get register for the access counter buffer
volatile NvU32* pAccessCntrBufferGet;
// pointer to the Put register for the access counter buffer
volatile NvU32* pAccessCntrBufferPut;
// pointer to the Full register for the access counter buffer
volatile NvU32* pAccessCntrBufferFull;
// pointer to the hub interrupt
volatile NvU32* pHubIntr;
// pointer to interrupt enable register
volatile NvU32* pHubIntrEnSet;
// pointer to interrupt disable register
volatile NvU32* pHubIntrEnClear;
// mask for the access counter buffer
NvU32 accessCounterMask;
// access counter buffer cpu mapping and size
void* bufferAddress;
NvU32 bufferSize;
NvHandle accessCntrBufferHandle;
// The Notification address in the access counter notification msg does not
// contain the correct upper bits 63-47 for GPA-based notifications. RM
// provides us with the correct offset to be added.
// See Bug 1803015
NvU64 baseDmaSysmemAddr;
} UvmGpuAccessCntrInfo;
typedef enum
{
UVM_ACCESS_COUNTER_GRANULARITY_64K = 1,
UVM_ACCESS_COUNTER_GRANULARITY_2M = 2,
UVM_ACCESS_COUNTER_GRANULARITY_16M = 3,
UVM_ACCESS_COUNTER_GRANULARITY_16G = 4,
} UVM_ACCESS_COUNTER_GRANULARITY;
typedef enum
{
UVM_ACCESS_COUNTER_USE_LIMIT_NONE = 1,
UVM_ACCESS_COUNTER_USE_LIMIT_QTR = 2,
UVM_ACCESS_COUNTER_USE_LIMIT_HALF = 3,
UVM_ACCESS_COUNTER_USE_LIMIT_FULL = 4,
} UVM_ACCESS_COUNTER_USE_LIMIT;
typedef struct UvmGpuAccessCntrConfig_tag
{
NvU32 mimcGranularity;
NvU32 momcGranularity;
NvU32 mimcUseLimit;
NvU32 momcUseLimit;
NvU32 threshold;
} UvmGpuAccessCntrConfig;
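// Illustrative sketch (not part of this interface): one possible access
// counter configuration built from the enums above. The threshold value is
// an arbitrary example.
static const UvmGpuAccessCntrConfig uvmExampleAccessCntrConfig = {
    .mimcGranularity = UVM_ACCESS_COUNTER_GRANULARITY_2M,
    .momcGranularity = UVM_ACCESS_COUNTER_GRANULARITY_2M,
    .mimcUseLimit = UVM_ACCESS_COUNTER_USE_LIMIT_FULL,
    .momcUseLimit = UVM_ACCESS_COUNTER_USE_LIMIT_FULL,
    .threshold = 256, // example trigger count; tuning is workload-specific
};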
typedef UvmGpuChannelInfo gpuChannelInfo;
typedef UvmGpuChannelAllocParams gpuChannelAllocParams;
typedef UvmGpuCaps gpuCaps;
typedef UvmGpuCopyEngineCaps gpuCeCaps;
typedef UvmGpuCopyEnginesCaps gpuCesCaps;
typedef UvmGpuP2PCapsParams getP2PCapsParams;
typedef UvmGpuAddressSpaceInfo gpuAddressSpaceInfo;
typedef UvmGpuAllocInfo gpuAllocInfo;
typedef UvmGpuInfo gpuInfo;
typedef UvmGpuClientInfo gpuClientInfo;
typedef UvmGpuAccessCntrInfo gpuAccessCntrInfo;
typedef UvmGpuAccessCntrConfig gpuAccessCntrConfig;
typedef UvmGpuFaultInfo gpuFaultInfo;
typedef UvmGpuMemoryInfo gpuMemoryInfo;
typedef UvmGpuExternalMappingInfo gpuExternalMappingInfo;
typedef UvmGpuChannelResourceInfo gpuChannelResourceInfo;
typedef UvmGpuChannelInstanceInfo gpuChannelInstanceInfo;
typedef UvmGpuChannelResourceBindParams gpuChannelResourceBindParams;
typedef UvmGpuFbInfo gpuFbInfo;
typedef UvmGpuEccInfo gpuEccInfo;
typedef UvmGpuPagingChannel *gpuPagingChannelHandle;
typedef UvmGpuPagingChannelInfo gpuPagingChannelInfo;
typedef UvmGpuPagingChannelAllocParams gpuPagingChannelAllocParams;
typedef UvmPmaAllocationOptions gpuPmaAllocationOptions;
#endif // _NV_UVM_TYPES_H_

View File

@@ -0,0 +1,179 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 1993-2006 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
/***************************************************************************\
|* *|
|* NV GPU Types *|
|* *|
|* This header contains definitions describing NVIDIA's GPU hardware state. *|
|* *|
\***************************************************************************/
#ifndef NVGPUTYPES_INCLUDED
#define NVGPUTYPES_INCLUDED
#ifdef __cplusplus
extern "C" {
#endif
#include "nvtypes.h"
/***************************************************************************\
|* NvNotification *|
\***************************************************************************/
/***** NvNotification Structure *****/
/*
* NV objects return information about method completion to clients via an
* array of notification structures in main memory.
*
* The client sets the status field to NV???_NOTIFICATION_STATUS_IN_PROGRESS.
* NV fills in the NvNotification[] data structure in the following order:
* timeStamp, otherInfo32, otherInfo16, and then status.
*/
/* memory data structures */
typedef volatile struct NvNotificationRec {
struct { /* 0000- */
NvU32 nanoseconds[2]; /* nanoseconds since Jan. 1, 1970 0- 7*/
} timeStamp; /* -0007*/
NvV32 info32; /* info returned depends on method 0008-000b*/
NvV16 info16; /* info returned depends on method 000c-000d*/
NvV16 status; /* user sets bit 15, NV sets status 000e-000f*/
} NvNotification;
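/*
 * Illustrative sketch (not part of this header): the polling protocol
 * described above. EXAMPLE_STATUS_IN_PROGRESS is a placeholder for the
 * class-specific NV???_NOTIFICATION_STATUS_IN_PROGRESS value (the user sets
 * bit 15). Because status is the last field NV writes, observing it change
 * guarantees timeStamp, info32, and info16 are already valid.
 */
#define EXAMPLE_STATUS_IN_PROGRESS ((NvV16)0x8000)
static NvV16 exampleWaitForNotifier(NvNotification *notifier)
{
    notifier->status = EXAMPLE_STATUS_IN_PROGRESS;
    /* ... submit the method that completes into this notifier ... */
    while (notifier->status == EXAMPLE_STATUS_IN_PROGRESS) {
        /* spin; a real client would yield or sleep */
    }
    return notifier->status;
}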
/***************************************************************************\
|* NvGpuSemaphore *|
\***************************************************************************/
/***** NvGpuSemaphore Structure *****/
/*
* NvGpuSemaphore objects are used by the GPU to synchronize multiple
* command-streams.
*
* Please refer to class documentation for details regarding the content of
* the data[] field.
*/
/* memory data structures */
typedef volatile struct NvGpuSemaphoreRec {
NvV32 data[2]; /* Payload/Report data 0000-0007*/
struct { /* 0008- */
NvV32 nanoseconds[2]; /* nanoseconds since Jan. 1, 1970 8- f*/
} timeStamp; /* -000f*/
} NvGpuSemaphore;
/***************************************************************************\
|* NvGetReport *|
\***************************************************************************/
/*
* NV objects, starting with Kelvin, return information such as pixel counts to
* the user via the NV*_GET_REPORT method.
*
* The client fills in the "zero" field to any nonzero value and waits until it
* becomes zero. NV fills in the timeStamp, value, and zero fields.
*/
typedef volatile struct NVGetReportRec {
struct { /* 0000- */
NvU32 nanoseconds[2]; /* nanoseconds since Jan. 1, 1970 0- 7*/
} timeStamp; /* -0007*/
NvU32 value; /* info returned depends on method 0008-000b*/
NvU32 zero; /* always written to zero 000c-000f*/
} NvGetReport;
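/*
 * Illustrative sketch (not part of this header): the NV*_GET_REPORT
 * handshake described above. The helper name is an assumption made for this
 * example.
 */
static NvU32 exampleWaitForReport(NvGetReport *report)
{
    report->zero = 1; /* any nonzero value arms the report */
    /* ... issue the NV*_GET_REPORT method ... */
    while (report->zero != 0) {
        /* spin; a real client would yield or sleep */
    }
    return report->value;
}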
/***************************************************************************\
|* NvRcNotification *|
\***************************************************************************/
/*
* NV robust channel notification information is reported to clients via
* standard NV01_EVENT objects bound to instances of the NV*_CHANNEL_DMA and
* NV*_CHANNEL_GPFIFO objects.
*/
typedef struct NvRcNotificationRec {
struct {
NvU32 nanoseconds[2]; /* nanoseconds since Jan. 1, 1970 0- 7*/
} timeStamp; /* -0007*/
NvU32 exceptLevel; /* exception level 000c-000f*/
NvU32 exceptType; /* exception type 0010-0013*/
} NvRcNotification;
/***************************************************************************\
|* NvSyncPointFence *|
\***************************************************************************/
/***** NvSyncPointFence Structure *****/
/*
* NvSyncPointFence objects represent a syncpoint event. The syncPointID
* identifies the syncpoint register and the value is the value that the
* register will contain right after the event occurs.
*
* If syncPointID contains NV_INVALID_SYNCPOINT_ID then this is an invalid
* event. This is often used to indicate an event in the past (i.e. no need to
* wait).
*
* For more info on syncpoints refer to Mobile channel and syncpoint
* documentation.
*/
typedef struct NvSyncPointFenceRec {
NvU32 syncPointID;
NvU32 value;
} NvSyncPointFence;
#define NV_INVALID_SYNCPOINT_ID ((NvU32)-1)
/***************************************************************************\
|* *|
|* 64 bit type definitions for use in interface structures. *|
|* *|
\***************************************************************************/
#if !defined(XAPIGEN) /* NvOffset is XAPIGEN builtin type, so skip typedef */
typedef NvU64 NvOffset; /* GPU address */
#endif
#define NvOffset_HI32(n) ((NvU32)(((NvU64)(n)) >> 32))
#define NvOffset_LO32(n) ((NvU32)((NvU64)(n)))
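/*
 * Illustrative usage sketch: splitting a 64-bit GPU address into the 32-bit
 * halves that many control structures expect. The helper name is an
 * assumption made for this example.
 */
static void exampleSplitOffset(NvOffset gpuAddr, NvU32 *hi, NvU32 *lo)
{
    *hi = NvOffset_HI32(gpuAddr); /* bits 63:32 */
    *lo = NvOffset_LO32(gpuAddr); /* bits 31:0 */
}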
/*
* There are two types of GPU-UUIDs available:
*
* (1) a SHA-256 based 32 byte ID, formatted as a 64 character
* hexadecimal string as "GPU-%16x-%08x-%08x-%08x-%024x"; this is
* deprecated.
*
* (2) a SHA-1 based 16 byte ID, formatted as a 32 character
* hexadecimal string as "GPU-%08x-%04x-%04x-%04x-%012x" (the
* canonical format of a UUID); this is the default.
*/
#define NV_GPU_UUID_SHA1_LEN (16)
#define NV_GPU_UUID_SHA256_LEN (32)
#define NV_GPU_UUID_LEN NV_GPU_UUID_SHA1_LEN
#ifdef __cplusplus
};
#endif
#endif /* NVGPUTYPES_INCLUDED */

View File

@@ -0,0 +1,533 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2014-2015 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#if !defined(NVKMS_API_TYPES_H)
#define NVKMS_API_TYPES_H
#include <nvtypes.h>
#include <nvmisc.h>
#include <nvlimits.h>
#define NVKMS_MAX_SUBDEVICES NV_MAX_SUBDEVICES
#define NVKMS_LEFT 0
#define NVKMS_RIGHT 1
#define NVKMS_MAX_EYES 2
#define NVKMS_MAIN_LAYER 0
#define NVKMS_OVERLAY_LAYER 1
#define NVKMS_MAX_LAYERS_PER_HEAD 8
#define NVKMS_MAX_PLANES_PER_SURFACE 3
#define NVKMS_DP_ADDRESS_STRING_LENGTH 64
#define NVKMS_DEVICE_ID_TEGRA 0x0000ffff
typedef NvU32 NvKmsDeviceHandle;
typedef NvU32 NvKmsDispHandle;
typedef NvU32 NvKmsConnectorHandle;
typedef NvU32 NvKmsSurfaceHandle;
typedef NvU32 NvKmsFrameLockHandle;
typedef NvU32 NvKmsDeferredRequestFifoHandle;
typedef NvU32 NvKmsSwapGroupHandle;
typedef NvU32 NvKmsVblankSyncObjectHandle;
struct NvKmsSize {
NvU16 width;
NvU16 height;
};
struct NvKmsPoint {
NvU16 x;
NvU16 y;
};
struct NvKmsSignedPoint {
NvS16 x;
NvS16 y;
};
struct NvKmsRect {
NvU16 x;
NvU16 y;
NvU16 width;
NvU16 height;
};
/*
* A 3x3 row-major matrix.
*
* The elements are 32-bit single-precision IEEE floating point values. The
* floating point bit pattern should be stored in NvU32s to be passed into the
* kernel.
*/
struct NvKmsMatrix {
NvU32 m[3][3];
};
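/*
 * Illustrative sketch (not part of this header): storing a single-precision
 * float's bit pattern in an NvU32 matrix entry, as the comment above
 * requires. The union-based type pun and the helper name are assumptions
 * made for this example.
 */
static inline NvU32 nvKmsExampleFloatToBits(float f)
{
    union { float f; NvU32 u; } pun;
    pun.f = f;
    return pun.u; /* e.g., 1.0f yields the bit pattern 0x3F800000 */
}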
typedef enum {
NVKMS_CONNECTOR_TYPE_DP = 0,
NVKMS_CONNECTOR_TYPE_VGA = 1,
NVKMS_CONNECTOR_TYPE_DVI_I = 2,
NVKMS_CONNECTOR_TYPE_DVI_D = 3,
NVKMS_CONNECTOR_TYPE_ADC = 4,
NVKMS_CONNECTOR_TYPE_LVDS = 5,
NVKMS_CONNECTOR_TYPE_HDMI = 6,
NVKMS_CONNECTOR_TYPE_USBC = 7,
NVKMS_CONNECTOR_TYPE_DSI = 8,
NVKMS_CONNECTOR_TYPE_DP_SERIALIZER = 9,
NVKMS_CONNECTOR_TYPE_UNKNOWN = 10,
NVKMS_CONNECTOR_TYPE_MAX = NVKMS_CONNECTOR_TYPE_UNKNOWN,
} NvKmsConnectorType;
static inline
const char *NvKmsConnectorTypeString(const NvKmsConnectorType connectorType)
{
switch (connectorType) {
case NVKMS_CONNECTOR_TYPE_DP: return "DP";
case NVKMS_CONNECTOR_TYPE_VGA: return "VGA";
case NVKMS_CONNECTOR_TYPE_DVI_I: return "DVI-I";
case NVKMS_CONNECTOR_TYPE_DVI_D: return "DVI-D";
case NVKMS_CONNECTOR_TYPE_ADC: return "ADC";
case NVKMS_CONNECTOR_TYPE_LVDS: return "LVDS";
case NVKMS_CONNECTOR_TYPE_HDMI: return "HDMI";
case NVKMS_CONNECTOR_TYPE_USBC: return "USB-C";
case NVKMS_CONNECTOR_TYPE_DSI: return "DSI";
case NVKMS_CONNECTOR_TYPE_DP_SERIALIZER: return "DP-SERIALIZER";
default: break;
}
return "Unknown";
}
typedef enum {
NVKMS_CONNECTOR_SIGNAL_FORMAT_VGA = 0,
NVKMS_CONNECTOR_SIGNAL_FORMAT_LVDS = 1,
NVKMS_CONNECTOR_SIGNAL_FORMAT_TMDS = 2,
NVKMS_CONNECTOR_SIGNAL_FORMAT_DP = 3,
NVKMS_CONNECTOR_SIGNAL_FORMAT_DSI = 4,
NVKMS_CONNECTOR_SIGNAL_FORMAT_UNKNOWN = 5,
NVKMS_CONNECTOR_SIGNAL_FORMAT_MAX =
NVKMS_CONNECTOR_SIGNAL_FORMAT_UNKNOWN,
} NvKmsConnectorSignalFormat;
/*!
* Description of Notifiers and Semaphores (Non-isochronous (NISO) surfaces).
*
* When flipping, the client can optionally specify a notifier and/or
* a semaphore to use with the flip. The surfaces used for these
* should be registered with NVKMS to get an NvKmsSurfaceHandle.
*
* NvKmsNIsoSurface::offsetInWords indicates the starting location, in
* 32-bit words, within the surface where EVO should write the
* notifier or semaphore. Note that only the first 4096 bytes of a
* surface can be used by semaphores or notifiers; offsetInWords must
* allow for the semaphore or notifier to be written within the first
* 4096 bytes of the surface. I.e., this must be satisfied:
*
* ((offsetInWords * 4) + elementSizeInBytes) <= 4096
*
* Where elementSizeInBytes is:
*
* if NISO_FORMAT_FOUR_WORD*, elementSizeInBytes = 16
* if NISO_FORMAT_LEGACY,
* if overlay && notifier, elementSizeInBytes = 16
* else, elementSizeInBytes = 4
*
* Note that different GPUs support different semaphore and notifier formats.
* Check NvKmsAllocDeviceReply::validNIsoFormatMask to determine which are
* valid for the given device.
*
* Note also that FOUR_WORD and FOUR_WORD_NVDISPLAY are the same size, but
* FOUR_WORD uses a format compatible with display class 907[ce], and
* FOUR_WORD_NVDISPLAY uses a format compatible with c37e (actually defined by
* the NV_DISP_NOTIFIER definition in clc37d.h).
*/
enum NvKmsNIsoFormat {
NVKMS_NISO_FORMAT_LEGACY,
NVKMS_NISO_FORMAT_FOUR_WORD,
NVKMS_NISO_FORMAT_FOUR_WORD_NVDISPLAY,
};
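/*
 * Illustrative sketch (not part of this header): validating offsetInWords
 * against the 4096-byte rule above. The helper name and NvBool parameters
 * are assumptions made for this example.
 */
static inline NvBool nvKmsExampleNIsoOffsetValid(enum NvKmsNIsoFormat format,
                                                 NvBool isOverlay,
                                                 NvBool isNotifier,
                                                 NvU32 offsetInWords)
{
    NvU32 elementSizeInBytes;
    if (format == NVKMS_NISO_FORMAT_LEGACY) {
        /* Legacy overlay notifiers are 16 bytes; everything else is 4. */
        elementSizeInBytes = (isOverlay && isNotifier) ? 16 : 4;
    } else {
        /* FOUR_WORD and FOUR_WORD_NVDISPLAY elements are 16 bytes. */
        elementSizeInBytes = 16;
    }
    return ((offsetInWords * 4) + elementSizeInBytes) <= 4096;
}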
enum NvKmsEventType {
NVKMS_EVENT_TYPE_DPY_CHANGED,
NVKMS_EVENT_TYPE_DYNAMIC_DPY_CONNECTED,
NVKMS_EVENT_TYPE_DYNAMIC_DPY_DISCONNECTED,
NVKMS_EVENT_TYPE_DPY_ATTRIBUTE_CHANGED,
NVKMS_EVENT_TYPE_FRAMELOCK_ATTRIBUTE_CHANGED,
NVKMS_EVENT_TYPE_FLIP_OCCURRED,
};
typedef enum {
NV_EVO_SCALER_1TAP = 0,
NV_EVO_SCALER_2TAPS = 1,
NV_EVO_SCALER_3TAPS = 2,
NV_EVO_SCALER_5TAPS = 3,
NV_EVO_SCALER_8TAPS = 4,
NV_EVO_SCALER_TAPS_MIN = NV_EVO_SCALER_1TAP,
NV_EVO_SCALER_TAPS_MAX = NV_EVO_SCALER_8TAPS,
} NVEvoScalerTaps;
/* This structure describes the scaling bounds for a given layer. */
struct NvKmsScalingUsageBounds {
/*
* Maximum vertical downscale factor (scaled by 1024)
*
* For example, if the downscale factor is 1.5, then maxVDownscaleFactor
* would be 1.5 x 1024 = 1536.
*/
NvU16 maxVDownscaleFactor;
/*
* Maximum horizontal downscale factor (scaled by 1024)
*
* See the example above for maxVDownscaleFactor.
*/
NvU16 maxHDownscaleFactor;
/* Maximum vertical taps allowed */
NVEvoScalerTaps vTaps;
/* Whether vertical upscaling is allowed */
NvBool vUpscalingAllowed;
};
struct NvKmsUsageBounds {
struct {
NvBool usable;
struct NvKmsScalingUsageBounds scaling;
NvU64 supportedSurfaceMemoryFormats NV_ALIGN_BYTES(8);
} layer[NVKMS_MAX_LAYERS_PER_HEAD];
};
/*
* A 3x4 row-major colorspace conversion matrix.
*
* The output color C' is the CSC matrix M times the column vector
* [ R, G, B, 1 ].
*
* Each entry in the matrix is a signed 2's-complement fixed-point number with
* 3 integer bits and 16 fractional bits.
*/
struct NvKmsCscMatrix {
NvS32 m[3][4];
};
#define NVKMS_IDENTITY_CSC_MATRIX \
(struct NvKmsCscMatrix){{ \
{ 0x10000, 0, 0, 0 }, \
{ 0, 0x10000, 0, 0 }, \
{ 0, 0, 0x10000, 0 } \
}}
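/*
 * Illustrative sketch (not part of this header): converting a floating-point
 * coefficient to the signed 3.16 fixed-point entries used by NvKmsCscMatrix;
 * 1.0 maps to 0x10000, as in the identity matrix above. The clamp to the
 * representable range [-8.0, 8.0) and the helper name are assumptions made
 * for this example.
 */
static inline NvS32 nvKmsExampleToFixed316(double coeff)
{
    double scaled = coeff * 65536.0; /* 16 fractional bits */
    if (scaled > 524287.0) scaled = 524287.0;   /* largest S3.16 value */
    if (scaled < -524288.0) scaled = -524288.0; /* smallest S3.16 value */
    return (NvS32)scaled;
}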
/*!
* A color key match bit is used in the blend equations; one can select the
* src or dst color key when blending. An asserted key bit means match, and
* a de-asserted key bit means no match.
*
* The src color key uses the key bit from the current layer; the dst color
* key uses the key bit from the previous layer composition stage. The
* selected key bit is inherited by the blended pixel and serves as the dst
* color key for the next blending stage.
*
* src: Forward the color key match bit from the current layer pixel to the
* next layer composition stage.
*
* dst: Forward the color key match bit from the previous composition stage
* pixel to the next layer composition stage.
*
* disable: Forward "1" to the next layer composition stage as the color key.
*/
enum NvKmsCompositionColorKeySelect {
NVKMS_COMPOSITION_COLOR_KEY_SELECT_DISABLE = 0,
NVKMS_COMPOSITION_COLOR_KEY_SELECT_SRC,
NVKMS_COMPOSITION_COLOR_KEY_SELECT_DST,
};
#define NVKMS_COMPOSITION_NUMBER_OF_COLOR_KEY_SELECTS 3
/*!
* Composition modes used for surfaces in general.
* The various types of composition are:
*
* Opaque: source pixels are opaque regardless of alpha,
* and will occlude the destination pixel.
*
* Alpha blending: aka opacity, which could be specified
* for a surface in its entirety, or on a per-pixel basis.
*
* Non-premultiplied: alpha value applies to source pixel,
* and also counter-weighs the destination pixel.
* Premultiplied: alpha already applied to source pixel,
* so it only counter-weighs the destination pixel.
*
* Color keying: use a color key structure to decide
* the criteria for matching and compositing.
* (See NVColorKey below.)
*/
enum NvKmsCompositionBlendingMode {
/*!
* Modes that use no other parameters.
*/
NVKMS_COMPOSITION_BLENDING_MODE_OPAQUE,
/*!
* Mode that ignores both the per-pixel alpha provided
* by the client and the surfaceAlpha, making the source
* pixel totally transparent.
*/
NVKMS_COMPOSITION_BLENDING_MODE_TRANSPARENT,
/*!
* Modes that use the per-pixel alpha provided by the client;
* the surfaceAlpha must be set to 0.
*/
NVKMS_COMPOSITION_BLENDING_MODE_PREMULT_ALPHA,
NVKMS_COMPOSITION_BLENDING_MODE_NON_PREMULT_ALPHA,
/*!
* These use both the surface-wide and per-pixel alpha values.
* surfaceAlpha is treated as numerator ranging from 0 to 255
* of a fraction whose denominator is 255.
*/
NVKMS_COMPOSITION_BLENDING_MODE_PREMULT_SURFACE_ALPHA,
NVKMS_COMPOSITION_BLENDING_MODE_NON_PREMULT_SURFACE_ALPHA,
};
static inline NvBool
NvKmsIsCompositionModeUseAlpha(enum NvKmsCompositionBlendingMode mode)
{
return mode == NVKMS_COMPOSITION_BLENDING_MODE_PREMULT_ALPHA ||
mode == NVKMS_COMPOSITION_BLENDING_MODE_NON_PREMULT_ALPHA ||
mode == NVKMS_COMPOSITION_BLENDING_MODE_PREMULT_SURFACE_ALPHA ||
mode == NVKMS_COMPOSITION_BLENDING_MODE_NON_PREMULT_SURFACE_ALPHA;
}
/*!
* Abstract description of a color key.
*
* a, r, g, and b are component values in the same width as the framebuffer
* values being scanned out.
*
* match[ARGB] defines whether that component is considered when matching the
* color key -- TRUE means that the value of the corresponding component must
* match the given value for the given pixel to be considered a 'key match';
* FALSE means that the value of that component is not a key match criterion.
*/
typedef struct {
NvU16 a, r, g, b;
NvBool matchA, matchR, matchG, matchB;
} NVColorKey;
/*!
* Describes the composition parameters for the single layer.
*/
struct NvKmsCompositionParams {
enum NvKmsCompositionColorKeySelect colorKeySelect;
NVColorKey colorKey;
/*
* It is possible to assign different blending modes for match pixels and
* nomatch pixels. blendingMode[0] is used to blend a pixel with the color key
* match bit "0", and blendingMode[1] is used to blend a pixel with the color
* key match bit "1".
*
* Because of hardware restrictions, match and nomatch pixels cannot use the
* blending modes PREMULT_ALPHA, NON_PREMULT_ALPHA, PREMULT_SURFACE_ALPHA,
* and NON_PREMULT_SURFACE_ALPHA at the same time.
*/
enum NvKmsCompositionBlendingMode blendingMode[2];
NvU8 surfaceAlpha; /* Applies to all pixels of entire surface */
/*
* Defines the composition order. A smaller value moves the layer closer to
* the top (away from the background). The values need not be consecutive;
* the only requirements are that each layer owned by the head has a
* distinct value and that the main layer has the greatest one.
*
* The cursor always remains on top of all other layers; this parameter
* has no effect on the cursor. NVKMS assigns a default depth to each of
* the supported layers, calculated as (NVKMS_MAX_LAYERS_PER_HEAD - index
* of the layer). If depth is set to '0', the default depth value is used.
*/
NvU8 depth;
};
/*!
* Describes the composition capabilities supported by the hardware for
* cursor or layer. It describes the supported color key selects and, for
* each supported color key select, the supported blending modes for match
* and nomatch pixels.
*/
struct NvKmsCompositionCapabilities {
struct {
/*
* A bitmask of the supported blending modes for match and nomatch
* pixels. It should be the bitwise 'or' of one or more
* NVBIT(NVKMS_COMPOSITION_BLENDING_MODE_*) values.
*/
NvU32 supportedBlendModes[2];
} colorKeySelect[NVKMS_COMPOSITION_NUMBER_OF_COLOR_KEY_SELECTS];
/*
* A bitmask of the supported color key selects.
*
* It should be the bitwise 'or' of one or more
* NVBIT(NVKMS_COMPOSITION_COLOR_KEY_SELECT_*)
* values.
*/
NvU32 supportedColorKeySelects;
};
struct NvKmsLayerCapabilities {
/*!
* Whether Layer supports the window mode. If window mode is supported,
* then clients can set the layer's dimensions so that they're smaller than
* the viewport, and can also change the output position of the layer to a
* non-(0, 0) position.
*
* NOTE: Dimension changes are currently unsupported for the main layer,
* and output position changes for the main layer are currently only
* supported via IOCTL_SET_LAYER_POSITION but not via flips. Support for
* these is coming soon, via changes to flip code.
*/
NvBool supportsWindowMode :1;
/*!
* Whether layer supports HDR pipe.
*/
NvBool supportsHDR :1;
/*!
* Describes the supported Color Key selects and blending modes for
* match and nomatch layer pixels.
*/
struct NvKmsCompositionCapabilities composition;
/*!
* Which NvKmsSurfaceMemoryFormat enum values are supported by the NVKMS
* device on the given scanout surface layer.
*
* Iff a particular enum NvKmsSurfaceMemoryFormat 'value' is supported,
* then (1 << value) will be set in the appropriate bitmask.
*
* Note that these bitmasks just report the static SW/HW capabilities,
* and are a superset of the formats that IMP may allow. Clients are
* still expected to honor the NvKmsUsageBounds for each head.
*/
NvU64 supportedSurfaceMemoryFormats NV_ALIGN_BYTES(8);
};
/*!
* Surface layouts.
*
* BlockLinear is the NVIDIA GPU native tiling format, arranging pixels into
* blocks or tiles for better locality during common GPU operations.
*
* Pitch is the naive "linear" surface layout with pixels laid out sequentially
* in memory line-by-line, optionally with some padding at the end of each line
* for alignment purposes.
*/
enum NvKmsSurfaceMemoryLayout {
NvKmsSurfaceMemoryLayoutBlockLinear = 0,
NvKmsSurfaceMemoryLayoutPitch = 1,
};
static inline const char *NvKmsSurfaceMemoryLayoutToString(
enum NvKmsSurfaceMemoryLayout layout)
{
switch (layout) {
default:
return "Unknown";
case NvKmsSurfaceMemoryLayoutBlockLinear:
return "BlockLinear";
case NvKmsSurfaceMemoryLayoutPitch:
return "Pitch";
}
}
typedef enum {
MUX_STATE_GET = 0,
MUX_STATE_INTEGRATED = 1,
MUX_STATE_DISCRETE = 2,
MUX_STATE_UNKNOWN = 3,
} NvMuxState;
enum NvKmsRotation {
NVKMS_ROTATION_0 = 0,
NVKMS_ROTATION_90 = 1,
NVKMS_ROTATION_180 = 2,
NVKMS_ROTATION_270 = 3,
NVKMS_ROTATION_MIN = NVKMS_ROTATION_0,
NVKMS_ROTATION_MAX = NVKMS_ROTATION_270,
};
struct NvKmsRRParams {
enum NvKmsRotation rotation;
NvBool reflectionX;
NvBool reflectionY;
};
/*!
* Convert each possible NvKmsRRParams to a unique integer [0..15],
* so that we can describe possible NvKmsRRParams with an NvU16 bitmask.
*
* E.g.
* rotation = 0, reflectionX = F, reflectionY = F == 0|0|0 == 0
* ...
* rotation = 270, reflectionX = T, reflectionY = T == 3|4|8 == 15
*/
static inline NvU8 NvKmsRRParamsToCapBit(const struct NvKmsRRParams *rrParams)
{
NvU8 bitPosition = (NvU8)rrParams->rotation;
if (rrParams->reflectionX) {
bitPosition |= NVBIT(2);
}
if (rrParams->reflectionY) {
bitPosition |= NVBIT(3);
}
return bitPosition;
}
/*
* NVKMS_MEMORY_ISO is used to tag surface memory that will be accessed via
* display's isochronous interface. Examples of this type of memory are pixel
* data and LUT entries.
*
* NVKMS_MEMORY_NISO is used to tag surface memory that will be accessed via
* display's non-isochronous interface. Examples of this type of memory are
* semaphores and notifiers.
*/
typedef enum {
NVKMS_MEMORY_ISO = 0,
NVKMS_MEMORY_NISO = 1,
} NvKmsMemoryIsoType;
typedef struct {
NvBool coherent;
NvBool noncoherent;
} NvKmsDispIOCoherencyModes;
#endif /* NVKMS_API_TYPES_H */

View File

@@ -0,0 +1,125 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2019 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#if !defined(NVKMS_FORMAT_H)
#define NVKMS_FORMAT_H
#ifdef __cplusplus
extern "C" {
#endif
#include "nvtypes.h"
/*
* In order to interpret these pixel format namings, please take note of these
* conventions:
* - The Y8_U8__Y8_V8_N422 and U8_Y8__V8_Y8_N422 formats are both packed formats
* that have an interleaved chroma component across every two pixels. The
* double-underscore is a separator between these two pixel groups.
* - The triple-underscore is a separator between planes.
* - The 'N' suffix is a delimiter for the chroma decimation factor.
*
* As examples of the above rules:
* - The Y8_U8__Y8_V8_N422 format has one 8-bit luma component (Y8) and one
* 8-bit chroma component (U8) in pixel N, and one 8-bit luma component (Y8)
* and one 8-bit chroma component (V8) in pixel (N + 1). This format is
* 422-decimated since the U and V chroma samples are shared between each
* pair of adjacent pixels per line.
* - The Y10___U10V10_N444 format has one plane of 10-bit luma (Y10) components,
* and another plane of 10-bit chroma components (U10V10). This format has no
* chroma decimation since the luma and chroma components are sampled at the
* same rate.
*/
enum NvKmsSurfaceMemoryFormat {
NvKmsSurfaceMemoryFormatI8 = 0,
NvKmsSurfaceMemoryFormatA1R5G5B5 = 1,
NvKmsSurfaceMemoryFormatX1R5G5B5 = 2,
NvKmsSurfaceMemoryFormatR5G6B5 = 3,
NvKmsSurfaceMemoryFormatA8R8G8B8 = 4,
NvKmsSurfaceMemoryFormatX8R8G8B8 = 5,
NvKmsSurfaceMemoryFormatA2B10G10R10 = 6,
NvKmsSurfaceMemoryFormatX2B10G10R10 = 7,
NvKmsSurfaceMemoryFormatA8B8G8R8 = 8,
NvKmsSurfaceMemoryFormatX8B8G8R8 = 9,
NvKmsSurfaceMemoryFormatRF16GF16BF16AF16 = 10,
NvKmsSurfaceMemoryFormatR16G16B16A16 = 11,
NvKmsSurfaceMemoryFormatRF32GF32BF32AF32 = 12,
NvKmsSurfaceMemoryFormatY8_U8__Y8_V8_N422 = 13,
NvKmsSurfaceMemoryFormatU8_Y8__V8_Y8_N422 = 14,
NvKmsSurfaceMemoryFormatY8___U8V8_N444 = 15,
NvKmsSurfaceMemoryFormatY8___V8U8_N444 = 16,
NvKmsSurfaceMemoryFormatY8___U8V8_N422 = 17,
NvKmsSurfaceMemoryFormatY8___V8U8_N422 = 18,
NvKmsSurfaceMemoryFormatY8___U8V8_N420 = 19,
NvKmsSurfaceMemoryFormatY8___V8U8_N420 = 20,
NvKmsSurfaceMemoryFormatY10___U10V10_N444 = 21,
NvKmsSurfaceMemoryFormatY10___V10U10_N444 = 22,
NvKmsSurfaceMemoryFormatY10___U10V10_N422 = 23,
NvKmsSurfaceMemoryFormatY10___V10U10_N422 = 24,
NvKmsSurfaceMemoryFormatY10___U10V10_N420 = 25,
NvKmsSurfaceMemoryFormatY10___V10U10_N420 = 26,
NvKmsSurfaceMemoryFormatY12___U12V12_N444 = 27,
NvKmsSurfaceMemoryFormatY12___V12U12_N444 = 28,
NvKmsSurfaceMemoryFormatY12___U12V12_N422 = 29,
NvKmsSurfaceMemoryFormatY12___V12U12_N422 = 30,
NvKmsSurfaceMemoryFormatY12___U12V12_N420 = 31,
NvKmsSurfaceMemoryFormatY12___V12U12_N420 = 32,
NvKmsSurfaceMemoryFormatY8___U8___V8_N444 = 33,
NvKmsSurfaceMemoryFormatY8___U8___V8_N420 = 34,
NvKmsSurfaceMemoryFormatMin = NvKmsSurfaceMemoryFormatI8,
NvKmsSurfaceMemoryFormatMax = NvKmsSurfaceMemoryFormatY8___U8___V8_N420,
};
typedef struct NvKmsSurfaceMemoryFormatInfo {
enum NvKmsSurfaceMemoryFormat format;
const char *name;
NvU8 depth;
NvBool isYUV;
NvU8 numPlanes;
union {
struct {
NvU8 bytesPerPixel;
NvU8 bitsPerPixel;
} rgb;
struct {
NvU8 depthPerComponent;
NvU8 storageBitsPerComponent;
NvU8 horizChromaDecimationFactor;
NvU8 vertChromaDecimationFactor;
} yuv;
};
} NvKmsSurfaceMemoryFormatInfo;
const NvKmsSurfaceMemoryFormatInfo *nvKmsGetSurfaceMemoryFormatInfo(
const enum NvKmsSurfaceMemoryFormat format);
const char *nvKmsSurfaceMemoryFormatToString(
const enum NvKmsSurfaceMemoryFormat format);
#ifdef __cplusplus
};
#endif
#endif /* NVKMS_FORMAT_H */

File diff suppressed because it is too large

View File

@@ -0,0 +1,59 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2017 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#pragma once
//
// This file was generated with FINN, an NVIDIA coding tool.
// Source file: nvlimits.finn
//
/*
* This is the maximum number of GPUs supported in a single system.
*/
#define NV_MAX_DEVICES 32
/*
* This is the maximum number of subdevices within a single device.
*/
#define NV_MAX_SUBDEVICES 8
/*
* This is the maximum length of the process name string.
*/
#define NV_PROC_NAME_MAX_LENGTH 100U
/*
* This is the maximum number of heads per GPU.
*/
#define NV_MAX_HEADS 4

View File

@@ -0,0 +1,915 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 1993-2020 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
/*
* nvmisc.h
*/
#ifndef __NV_MISC_H
#define __NV_MISC_H
#ifdef __cplusplus
extern "C" {
#endif //__cplusplus
#include "nvtypes.h"
#if !defined(NVIDIA_UNDEF_LEGACY_BIT_MACROS)
//
// Miscellaneous macros useful for bit field manipulations
//
// STUPID HACK FOR CL 19434692. Will revert when fix CL is delivered bfm -> chips_a.
#ifndef BIT
#define BIT(b) (1U<<(b))
#endif
#ifndef BIT32
#define BIT32(b) ((NvU32)1U<<(b))
#endif
#ifndef BIT64
#define BIT64(b) ((NvU64)1U<<(b))
#endif
#endif
//
// It is recommended to use the following bit macros to avoid macro name
// collisions with other src code bases.
//
#ifndef NVBIT
#define NVBIT(b) (1U<<(b))
#endif
#ifndef NVBIT_TYPE
#define NVBIT_TYPE(b, t) (((t)1U)<<(b))
#endif
#ifndef NVBIT32
#define NVBIT32(b) NVBIT_TYPE(b, NvU32)
#endif
#ifndef NVBIT64
#define NVBIT64(b) NVBIT_TYPE(b, NvU64)
#endif
// Helper macros for 32-bit bitmasks
#define NV_BITMASK32_ELEMENT_SIZE (sizeof(NvU32) << 3)
#define NV_BITMASK32_IDX(chId) (((chId) & ~(0x1F)) >> 5)
#define NV_BITMASK32_OFFSET(chId) ((chId) & (0x1F))
#define NV_BITMASK32_SET(pChannelMask, chId) \
(pChannelMask)[NV_BITMASK32_IDX(chId)] |= NVBIT(NV_BITMASK32_OFFSET(chId))
#define NV_BITMASK32_GET(pChannelMask, chId) \
((pChannelMask)[NV_BITMASK32_IDX(chId)] & NVBIT(NV_BITMASK32_OFFSET(chId)))
// Index of the 'on' bit (assuming that there is only one).
// Even if multiple bits are 'on', result is in range of 0-31.
#define BIT_IDX_32(n) \
(((((n) & 0xFFFF0000U) != 0U) ? 0x10U: 0U) | \
((((n) & 0xFF00FF00U) != 0U) ? 0x08U: 0U) | \
((((n) & 0xF0F0F0F0U) != 0U) ? 0x04U: 0U) | \
((((n) & 0xCCCCCCCCU) != 0U) ? 0x02U: 0U) | \
((((n) & 0xAAAAAAAAU) != 0U) ? 0x01U: 0U) )
// Index of the 'on' bit (assuming that there is only one).
// Even if multiple bits are 'on', result is in range of 0-63.
#define BIT_IDX_64(n) \
(((((n) & 0xFFFFFFFF00000000ULL) != 0U) ? 0x20U: 0U) | \
((((n) & 0xFFFF0000FFFF0000ULL) != 0U) ? 0x10U: 0U) | \
((((n) & 0xFF00FF00FF00FF00ULL) != 0U) ? 0x08U: 0U) | \
((((n) & 0xF0F0F0F0F0F0F0F0ULL) != 0U) ? 0x04U: 0U) | \
((((n) & 0xCCCCCCCCCCCCCCCCULL) != 0U) ? 0x02U: 0U) | \
((((n) & 0xAAAAAAAAAAAAAAAAULL) != 0U) ? 0x01U: 0U) )
/*!
* DRF MACRO README:
*
* Glossary:
* DRF: Device, Register, Field
* FLD: Field
* REF: Reference
*
* #define NV_DEVICE_OMEGA_REGISTER_ALPHA 0xDEADBEEF
* #define NV_DEVICE_OMEGA_REGISTER_ALPHA_FIELD_GAMMA 27:0
* #define NV_DEVICE_OMEGA_REGISTER_ALPHA_FIELD_ZETA 31:28
* #define NV_DEVICE_OMEGA_REGISTER_ALPHA_FIELD_ZETA_ZERO 0x00000000
* #define NV_DEVICE_OMEGA_REGISTER_ALPHA_FIELD_ZETA_ONE 0x00000001
* #define NV_DEVICE_OMEGA_REGISTER_ALPHA_FIELD_ZETA_TWO 0x00000002
* #define NV_DEVICE_OMEGA_REGISTER_ALPHA_FIELD_ZETA_THREE 0x00000003
* #define NV_DEVICE_OMEGA_REGISTER_ALPHA_FIELD_ZETA_FOUR 0x00000004
* #define NV_DEVICE_OMEGA_REGISTER_ALPHA_FIELD_ZETA_FIVE 0x00000005
* #define NV_DEVICE_OMEGA_REGISTER_ALPHA_FIELD_ZETA_SIX 0x00000006
* #define NV_DEVICE_OMEGA_REGISTER_ALPHA_FIELD_ZETA_SEVEN 0x00000007
*
*
* Device = _DEVICE_OMEGA
* This is the common "base" that a group of registers in a manual share
*
* Register = _REGISTER_ALPHA
* Register for a given block of defines is the common root for one or more fields and constants
*
* Field(s) = _FIELD_GAMMA, _FIELD_ZETA
* These are the bit ranges for a given field within the register
* Fields are not required to have defined constant values (enumerations)
*
* Constant(s) = _ZERO, _ONE, _TWO, ...
* These are named values (enums) a field can contain; the width of the constants should not be larger than the field width
*
* MACROS:
*
* DRF_SHIFT:
* Bit index of the lower bound of a field
* DRF_SHIFT(NV_DEVICE_OMEGA_REGISTER_ALPHA_FIELD_ZETA) == 28
*
* DRF_SHIFT_RT:
* Bit index of the higher bound of a field
* DRF_SHIFT_RT(NV_DEVICE_OMEGA_REGISTER_ALPHA_FIELD_ZETA) == 31
*
* DRF_MASK:
* Produces a mask of 1-s equal to the width of a field
* DRF_MASK(NV_DEVICE_OMEGA_REGISTER_ALPHA_FIELD_ZETA) == 0xF (four 1s starting at bit 0)
*
* DRF_SHIFTMASK:
* Produces a mask of 1s equal to the width of a field at the location of the field
* DRF_SHIFTMASK(NV_DEVICE_OMEGA_REGISTER_ALPHA_FIELD_ZETA) == 0xF0000000
*
* DRF_DEF:
* Shifts a field constant's value to the correct field offset
* DRF_DEF(_DEVICE_OMEGA, _REGISTER_ALPHA, _FIELD_ZETA, _THREE) == 0x30000000
*
* DRF_NUM:
* Shifts a number to the location of a particular field
* DRF_NUM(_DEVICE_OMEGA, _REGISTER_ALPHA, _FIELD_ZETA, 3) == 0x30000000
* NOTE: If the value passed in is wider than the field, the value's high bits will be truncated
*
* DRF_SIZE:
* Provides the width of the field in bits
* DRF_SIZE(NV_DEVICE_OMEGA_REGISTER_ALPHA_FIELD_ZETA) == 4
*
* DRF_VAL:
* Provides the value of an input within the field specified
* DRF_VAL(_DEVICE_OMEGA, _REGISTER_ALPHA, _FIELD_ZETA, 0xABCD1234) == 0xA
 *      This is effectively the inverse of DRF_NUM
*
* DRF_IDX...:
 *      These macros are similar to the above, but for fields that accept an index argument
*
* FLD_SET_DRF:
* Set the field bits in a given value with the given field constant
* NvU32 x = 0x00001234;
* x = FLD_SET_DRF(_DEVICE_OMEGA, _REGISTER_ALPHA, _FIELD_ZETA, _THREE, x);
* x == 0x30001234;
*
* FLD_SET_DRF_NUM:
* Same as FLD_SET_DRF but instead of using a field constant a literal/variable is passed in
* NvU32 x = 0x00001234;
* x = FLD_SET_DRF_NUM(_DEVICE_OMEGA, _REGISTER_ALPHA, _FIELD_ZETA, 0xF, x);
* x == 0xF0001234;
*
* FLD_IDX...:
 *      These macros are similar to the above, but for fields that accept an index argument
*
* FLD_TEST_DRF:
* Test if location specified by drf in 'v' has the same value as NV_drfc
* FLD_TEST_DRF(_DEVICE_OMEGA, _REGISTER_ALPHA, _FIELD_ZETA, _THREE, 0x3000ABCD) == NV_TRUE
*
* FLD_TEST_DRF_NUM:
* Test if locations specified by drf in 'v' have the same value as n
* FLD_TEST_DRF_NUM(_DEVICE_OMEGA, _REGISTER_ALPHA, _FIELD_ZETA, 0x3, 0x3000ABCD) == NV_TRUE
*
* REF_DEF:
* Like DRF_DEF but maintains full symbol name (use in cases where "NV" is not prefixed to the field)
* REF_DEF(SOME_OTHER_PREFIX_DEVICE_OMEGA_REGISTER_ALPHA_FIELD_ZETA, _THREE) == 0x30000000
*
* REF_VAL:
* Like DRF_VAL but maintains full symbol name (use in cases where "NV" is not prefixed to the field)
* REF_VAL(SOME_OTHER_PREFIX_DEVICE_OMEGA_REGISTER_ALPHA_FIELD_ZETA, 0xABCD1234) == 0xA
*
* REF_NUM:
* Like DRF_NUM but maintains full symbol name (use in cases where "NV" is not prefixed to the field)
 *      REF_NUM(SOME_OTHER_PREFIX_DEVICE_OMEGA_REGISTER_ALPHA_FIELD_ZETA, 0xA) == 0xA0000000
*
* FLD_SET_REF_NUM:
* Like FLD_SET_DRF_NUM but maintains full symbol name (use in cases where "NV" is not prefixed to the field)
* NvU32 x = 0x00001234;
* x = FLD_SET_REF_NUM(SOME_OTHER_PREFIX_DEVICE_OMEGA_REGISTER_ALPHA_FIELD_ZETA, 0xF, x);
* x == 0xF0001234;
*
* FLD_TEST_REF:
* Like FLD_TEST_DRF but maintains full symbol name (use in cases where "NV" is not prefixed to the field)
* FLD_TEST_REF(SOME_OTHER_PREFIX_DEVICE_OMEGA_REGISTER_ALPHA_FIELD_ZETA, _THREE, 0x3000ABCD) == NV_TRUE
*
* Other macros:
 *      There is a plethora of other macros below that extend the above (notably Multi-Word (MW),
 *      64-bit, and some register read/write variations). These are mostly self-explanatory; if
 *      you need to use them, you probably already know how they work.
*/
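/*
 * Quick usage sketch (illustrative, reusing the hypothetical NV_DEVICE_OMEGA
 * defines from the README above):
 *
 *     NvU32 reg  = DRF_DEF(_DEVICE_OMEGA, _REGISTER_ALPHA, _FIELD_ZETA, _THREE) |
 *                  DRF_NUM(_DEVICE_OMEGA, _REGISTER_ALPHA, _FIELD_GAMMA, 0x1234);
 *     // reg == 0x30001234
 *     NvU32 zeta = DRF_VAL(_DEVICE_OMEGA, _REGISTER_ALPHA, _FIELD_ZETA, reg);
 *     // zeta == 3
 */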
// tegra mobile uses nvmisc_macros.h and can't access nvmisc.h... and sometimes both get included.
#ifndef _NVMISC_MACROS_H
// Use Coverity Annotation to mark issues as false positives/ignore when using single bit defines.
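// Note: 'drf' expands to a bit range of the form 'extent:base', so the ':'
// from that expansion completes the ternary in DRF_ISBIT below.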
#define DRF_ISBIT(bitval,drf) \
( /* coverity[identical_branches] */ \
(bitval != 0) ? drf )
#define DEVICE_BASE(d)     (0?d)  // Note: naming is non-parallel to DRF_BASE/DRF_EXTENT below
#define DEVICE_EXTENT(d)   (1?d)  // Note: naming is non-parallel to DRF_BASE/DRF_EXTENT below
#ifdef NV_MISRA_COMPLIANCE_REQUIRED
#ifdef MISRA_14_3
#define DRF_BASE(drf) (drf##_LOW_FIELD)
#define DRF_EXTENT(drf) (drf##_HIGH_FIELD)
#define DRF_SHIFT(drf) ((drf##_LOW_FIELD) % 32U)
#define DRF_SHIFT_RT(drf) ((drf##_HIGH_FIELD) % 32U)
#define DRF_MASK(drf) (0xFFFFFFFFU >> (31U - ((drf##_HIGH_FIELD) % 32U) + ((drf##_LOW_FIELD) % 32U)))
#else
#define DRF_BASE(drf) (NV_FALSE?drf) // much better
#define DRF_EXTENT(drf) (NV_TRUE?drf) // much better
#define DRF_SHIFT(drf) (((NvU32)DRF_BASE(drf)) % 32U)
#define DRF_SHIFT_RT(drf) (((NvU32)DRF_EXTENT(drf)) % 32U)
#define DRF_MASK(drf) (0xFFFFFFFFU>>(31U - DRF_SHIFT_RT(drf) + DRF_SHIFT(drf)))
#endif
#define DRF_DEF(d,r,f,c) (((NvU32)(NV ## d ## r ## f ## c))<<DRF_SHIFT(NV ## d ## r ## f))
#define DRF_NUM(d,r,f,n) ((((NvU32)(n))&DRF_MASK(NV ## d ## r ## f))<<DRF_SHIFT(NV ## d ## r ## f))
#else
#define DRF_BASE(drf) (0?drf) // much better
#define DRF_EXTENT(drf) (1?drf) // much better
#define DRF_SHIFT(drf) ((DRF_ISBIT(0,drf)) % 32)
#define DRF_SHIFT_RT(drf) ((DRF_ISBIT(1,drf)) % 32)
#define DRF_MASK(drf) (0xFFFFFFFFU>>(31-((DRF_ISBIT(1,drf)) % 32)+((DRF_ISBIT(0,drf)) % 32)))
#define DRF_DEF(d,r,f,c) ((NV ## d ## r ## f ## c)<<DRF_SHIFT(NV ## d ## r ## f))
#define DRF_NUM(d,r,f,n) (((n)&DRF_MASK(NV ## d ## r ## f))<<DRF_SHIFT(NV ## d ## r ## f))
#endif
#define DRF_SHIFTMASK(drf) (DRF_MASK(drf)<<(DRF_SHIFT(drf)))
#define DRF_SIZE(drf) (DRF_EXTENT(drf)-DRF_BASE(drf)+1U)
#define DRF_VAL(d,r,f,v) (((v)>>DRF_SHIFT(NV ## d ## r ## f))&DRF_MASK(NV ## d ## r ## f))
#endif
// Signed version of DRF_VAL, which takes care of extending sign bit.
#define DRF_VAL_SIGNED(d,r,f,v) (((DRF_VAL(d,r,f,(v)) ^ (NVBIT(DRF_SIZE(NV ## d ## r ## f)-1U)))) - (NVBIT(DRF_SIZE(NV ## d ## r ## f)-1U)))
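// Sign-extension trick above: XOR-ing with the field's sign bit and then
// subtracting it propagates the sign into the upper bits. Illustrative:
// for a 4-bit field holding 0xF, DRF_VAL_SIGNED yields (NvU32)-1.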
#define DRF_IDX_DEF(d,r,f,i,c) ((NV ## d ## r ## f ## c)<<DRF_SHIFT(NV##d##r##f(i)))
#define DRF_IDX_OFFSET_DEF(d,r,f,i,o,c) ((NV ## d ## r ## f ## c)<<DRF_SHIFT(NV##d##r##f(i,o)))
#define DRF_IDX_NUM(d,r,f,i,n) (((n)&DRF_MASK(NV##d##r##f(i)))<<DRF_SHIFT(NV##d##r##f(i)))
#define DRF_IDX_VAL(d,r,f,i,v) (((v)>>DRF_SHIFT(NV##d##r##f(i)))&DRF_MASK(NV##d##r##f(i)))
#define DRF_IDX_OFFSET_VAL(d,r,f,i,o,v) (((v)>>DRF_SHIFT(NV##d##r##f(i,o)))&DRF_MASK(NV##d##r##f(i,o)))
// Fractional version of DRF_VAL which reads Fx.y fixed point number (x.y)*z
#define DRF_VAL_FRAC(d,r,x,y,v,z) ((DRF_VAL(d,r,x,(v))*z) + ((DRF_VAL(d,r,y,v)*z) / (1<<DRF_SIZE(NV##d##r##y))))
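// Illustrative: with integer field x holding 2 and a 4-bit fraction field y
// holding 8 (i.e. the fixed-point value 2.5), DRF_VAL_FRAC(d,r,x,y,v,1000)
// evaluates to 2500.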
//
// 64 Bit Versions
//
#define DRF_SHIFT64(drf) ((DRF_ISBIT(0,drf)) % 64)
#define DRF_MASK64(drf) (NV_U64_MAX>>(63-((DRF_ISBIT(1,drf)) % 64)+((DRF_ISBIT(0,drf)) % 64)))
#define DRF_SHIFTMASK64(drf) (DRF_MASK64(drf)<<(DRF_SHIFT64(drf)))
#define DRF_DEF64(d,r,f,c) (((NvU64)(NV ## d ## r ## f ## c))<<DRF_SHIFT64(NV ## d ## r ## f))
#define DRF_NUM64(d,r,f,n) ((((NvU64)(n))&DRF_MASK64(NV ## d ## r ## f))<<DRF_SHIFT64(NV ## d ## r ## f))
#define DRF_VAL64(d,r,f,v) ((((NvU64)(v))>>DRF_SHIFT64(NV ## d ## r ## f))&DRF_MASK64(NV ## d ## r ## f))
#define DRF_VAL_SIGNED64(d,r,f,v) (((DRF_VAL64(d,r,f,(v)) ^ (NVBIT64(DRF_SIZE(NV ## d ## r ## f)-1)))) - (NVBIT64(DRF_SIZE(NV ## d ## r ## f)-1)))
#define DRF_IDX_DEF64(d,r,f,i,c) (((NvU64)(NV ## d ## r ## f ## c))<<DRF_SHIFT64(NV##d##r##f(i)))
#define DRF_IDX_OFFSET_DEF64(d,r,f,i,o,c) ((NvU64)(NV ## d ## r ## f ## c)<<DRF_SHIFT64(NV##d##r##f(i,o)))
#define DRF_IDX_NUM64(d,r,f,i,n) ((((NvU64)(n))&DRF_MASK64(NV##d##r##f(i)))<<DRF_SHIFT64(NV##d##r##f(i)))
#define DRF_IDX_VAL64(d,r,f,i,v) ((((NvU64)(v))>>DRF_SHIFT64(NV##d##r##f(i)))&DRF_MASK64(NV##d##r##f(i)))
#define DRF_IDX_OFFSET_VAL64(d,r,f,i,o,v) (((NvU64)(v)>>DRF_SHIFT64(NV##d##r##f(i,o)))&DRF_MASK64(NV##d##r##f(i,o)))
#define FLD_SET_DRF64(d,r,f,c,v) (((NvU64)(v) & ~DRF_SHIFTMASK64(NV##d##r##f)) | DRF_DEF64(d,r,f,c))
#define FLD_SET_DRF_NUM64(d,r,f,n,v) ((((NvU64)(v)) & ~DRF_SHIFTMASK64(NV##d##r##f)) | DRF_NUM64(d,r,f,n))
#define FLD_IDX_SET_DRF64(d,r,f,i,c,v) (((NvU64)(v) & ~DRF_SHIFTMASK64(NV##d##r##f(i))) | DRF_IDX_DEF64(d,r,f,i,c))
#define FLD_IDX_OFFSET_SET_DRF64(d,r,f,i,o,c,v) (((NvU64)(v) & ~DRF_SHIFTMASK64(NV##d##r##f(i,o))) | DRF_IDX_OFFSET_DEF64(d,r,f,i,o,c))
#define FLD_IDX_SET_DRF_DEF64(d,r,f,i,c,v) (((NvU64)(v) & ~DRF_SHIFTMASK64(NV##d##r##f(i))) | DRF_IDX_DEF64(d,r,f,i,c))
#define FLD_IDX_SET_DRF_NUM64(d,r,f,i,n,v) (((NvU64)(v) & ~DRF_SHIFTMASK64(NV##d##r##f(i))) | DRF_IDX_NUM64(d,r,f,i,n))
#define FLD_SET_DRF_IDX64(d,r,f,c,i,v) (((NvU64)(v) & ~DRF_SHIFTMASK64(NV##d##r##f)) | DRF_DEF64(d,r,f,c(i)))
#define FLD_TEST_DRF64(d,r,f,c,v) (DRF_VAL64(d, r, f, (v)) == NV##d##r##f##c)
#define FLD_TEST_DRF_AND64(d,r,f,c,v) (DRF_VAL64(d, r, f, (v)) & NV##d##r##f##c)
#define FLD_TEST_DRF_NUM64(d,r,f,n,v) (DRF_VAL64(d, r, f, (v)) == (n))
#define FLD_IDX_TEST_DRF64(d,r,f,i,c,v) (DRF_IDX_VAL64(d, r, f, i, (v)) == NV##d##r##f##c)
#define FLD_IDX_OFFSET_TEST_DRF64(d,r,f,i,o,c,v) (DRF_IDX_OFFSET_VAL64(d, r, f, i, o, (v)) == NV##d##r##f##c)
#define REF_DEF64(drf,d) (((drf ## d)&DRF_MASK64(drf))<<DRF_SHIFT64(drf))
#define REF_VAL64(drf,v) (((NvU64)(v)>>DRF_SHIFT64(drf))&DRF_MASK64(drf))
#if defined(NV_MISRA_COMPLIANCE_REQUIRED) && defined(MISRA_14_3)
#define REF_NUM64(drf,n) (((NvU64)(n)&(0xFFFFFFFFFFFFFFFFU>>(63U-((drf##_HIGH_FIELD) % 63U)+((drf##_LOW_FIELD) % 63U)))) << ((drf##_LOW_FIELD) % 63U))
#else
#define REF_NUM64(drf,n) (((NvU64)(n)&DRF_MASK64(drf))<<DRF_SHIFT64(drf))
#endif
#define FLD_TEST_REF64(drf,c,v) (REF_VAL64(drf, v) == drf##c)
#define FLD_TEST_REF_AND64(drf,c,v) (REF_VAL64(drf, v) & drf##c)
#define FLD_SET_REF_NUM64(drf,n,v) (((NvU64)(v) & ~DRF_SHIFTMASK64(drf)) | REF_NUM64(drf,n))
//
// 32 Bit Versions
//
#ifdef NV_MISRA_COMPLIANCE_REQUIRED
#define FLD_SET_DRF(d,r,f,c,v) (((NvU32)(v) & ~DRF_SHIFTMASK(NV##d##r##f)) | DRF_DEF(d,r,f,c))
#define FLD_SET_DRF_NUM(d,r,f,n,v) (((NvU32)(v) & ~DRF_SHIFTMASK(NV##d##r##f)) | DRF_NUM(d,r,f,n))
#define FLD_IDX_SET_DRF(d,r,f,i,c,v) (((NvU32)(v) & ~DRF_SHIFTMASK(NV##d##r##f(i))) | DRF_IDX_DEF(d,r,f,i,c))
#define FLD_IDX_OFFSET_SET_DRF(d,r,f,i,o,c,v) (((NvU32)(v) & ~DRF_SHIFTMASK(NV##d##r##f(i,o))) | DRF_IDX_OFFSET_DEF(d,r,f,i,o,c))
#define FLD_IDX_SET_DRF_DEF(d,r,f,i,c,v) (((NvU32)(v) & ~DRF_SHIFTMASK(NV##d##r##f(i))) | DRF_IDX_DEF(d,r,f,i,c))
#define FLD_IDX_SET_DRF_NUM(d,r,f,i,n,v) (((NvU32)(v) & ~DRF_SHIFTMASK(NV##d##r##f(i))) | DRF_IDX_NUM(d,r,f,i,n))
#define FLD_SET_DRF_IDX(d,r,f,c,i,v) (((NvU32)(v) & ~DRF_SHIFTMASK(NV##d##r##f)) | DRF_DEF(d,r,f,c(i)))
#define FLD_TEST_DRF(d,r,f,c,v) ((DRF_VAL(d, r, f, (v)) == (NvU32)(NV##d##r##f##c)))
#define FLD_TEST_DRF_AND(d,r,f,c,v) ((DRF_VAL(d, r, f, (v)) & (NvU32)(NV##d##r##f##c)) != 0U)
#define FLD_TEST_DRF_NUM(d,r,f,n,v) ((DRF_VAL(d, r, f, (v)) == (NvU32)(n)))
#define FLD_IDX_TEST_DRF(d,r,f,i,c,v) ((DRF_IDX_VAL(d, r, f, i, (v)) == (NvU32)(NV##d##r##f##c)))
#define FLD_IDX_OFFSET_TEST_DRF(d,r,f,i,o,c,v) ((DRF_IDX_OFFSET_VAL(d, r, f, i, o, (v)) == (NvU32)(NV##d##r##f##c)))
#else
#define FLD_SET_DRF(d,r,f,c,v) (((v) & ~DRF_SHIFTMASK(NV##d##r##f)) | DRF_DEF(d,r,f,c))
#define FLD_SET_DRF_NUM(d,r,f,n,v) (((v) & ~DRF_SHIFTMASK(NV##d##r##f)) | DRF_NUM(d,r,f,n))
#define FLD_IDX_SET_DRF(d,r,f,i,c,v) (((v) & ~DRF_SHIFTMASK(NV##d##r##f(i))) | DRF_IDX_DEF(d,r,f,i,c))
#define FLD_IDX_OFFSET_SET_DRF(d,r,f,i,o,c,v) (((v) & ~DRF_SHIFTMASK(NV##d##r##f(i,o))) | DRF_IDX_OFFSET_DEF(d,r,f,i,o,c))
#define FLD_IDX_SET_DRF_DEF(d,r,f,i,c,v) (((v) & ~DRF_SHIFTMASK(NV##d##r##f(i))) | DRF_IDX_DEF(d,r,f,i,c))
#define FLD_IDX_SET_DRF_NUM(d,r,f,i,n,v) (((v) & ~DRF_SHIFTMASK(NV##d##r##f(i))) | DRF_IDX_NUM(d,r,f,i,n))
#define FLD_SET_DRF_IDX(d,r,f,c,i,v) (((v) & ~DRF_SHIFTMASK(NV##d##r##f)) | DRF_DEF(d,r,f,c(i)))
#define FLD_TEST_DRF(d,r,f,c,v) ((DRF_VAL(d, r, f, (v)) == NV##d##r##f##c))
#define FLD_TEST_DRF_AND(d,r,f,c,v) ((DRF_VAL(d, r, f, (v)) & NV##d##r##f##c))
#define FLD_TEST_DRF_NUM(d,r,f,n,v) ((DRF_VAL(d, r, f, (v)) == (n)))
#define FLD_IDX_TEST_DRF(d,r,f,i,c,v) ((DRF_IDX_VAL(d, r, f, i, (v)) == NV##d##r##f##c))
#define FLD_IDX_OFFSET_TEST_DRF(d,r,f,i,o,c,v) ((DRF_IDX_OFFSET_VAL(d, r, f, i, o, (v)) == NV##d##r##f##c))
#endif
#define REF_DEF(drf,d) (((drf ## d)&DRF_MASK(drf))<<DRF_SHIFT(drf))
#define REF_VAL(drf,v) (((v)>>DRF_SHIFT(drf))&DRF_MASK(drf))
#if defined(NV_MISRA_COMPLIANCE_REQUIRED) && defined(MISRA_14_3)
#define REF_NUM(drf,n) (((n)&(0xFFFFFFFFU>>(31U-((drf##_HIGH_FIELD) % 32U)+((drf##_LOW_FIELD) % 32U)))) << ((drf##_LOW_FIELD) % 32U))
#else
#define REF_NUM(drf,n) (((n)&DRF_MASK(drf))<<DRF_SHIFT(drf))
#endif
#define FLD_TEST_REF(drf,c,v) (REF_VAL(drf, (v)) == drf##c)
#define FLD_TEST_REF_AND(drf,c,v) (REF_VAL(drf, (v)) & drf##c)
#define FLD_SET_REF_NUM(drf,n,v) (((v) & ~DRF_SHIFTMASK(drf)) | REF_NUM(drf,n))
#define CR_DRF_DEF(d,r,f,c) ((CR ## d ## r ## f ## c)<<DRF_SHIFT(CR ## d ## r ## f))
#define CR_DRF_NUM(d,r,f,n) (((n)&DRF_MASK(CR ## d ## r ## f))<<DRF_SHIFT(CR ## d ## r ## f))
#define CR_DRF_VAL(d,r,f,v) (((v)>>DRF_SHIFT(CR ## d ## r ## f))&DRF_MASK(CR ## d ## r ## f))
// Multi-word (MW) field manipulations. For multi-word structures (e.g., Fermi SPH),
// fields may have bit numbers beyond 32. To avoid errors using "classic" multi-word macros,
// all the field extents are defined as "MW(X)". For example, MW(127:96) means
// the field is in bits 0-31 of word number 3 of the structure.
//
// DRF_VAL_MW() macro is meant to be used for native endian 32-bit aligned 32-bit word data,
// not for byte stream data.
//
// DRF_VAL_BS() macro is for byte stream data used in fbQueryBIOS_XXX().
//
#define DRF_EXPAND_MW(drf) drf // used to turn "MW(a:b)" into "a:b"
#define DRF_PICK_MW(drf,v) ((v)? DRF_EXPAND_##drf) // v==1 picks the high bound, v==0 the low; the ':' from the expanded range completes the ternary
#define DRF_WORD_MW(drf) (DRF_PICK_MW(drf,0)/32) // which word in a multi-word array
#define DRF_BASE_MW(drf) (DRF_PICK_MW(drf,0)%32) // which start bit in the selected word?
#define DRF_EXTENT_MW(drf) (DRF_PICK_MW(drf,1)%32) // which end bit in the selected word
#define DRF_SHIFT_MW(drf) (DRF_PICK_MW(drf,0)%32)
#define DRF_MASK_MW(drf) (0xFFFFFFFFU>>((31-(DRF_EXTENT_MW(drf))+(DRF_BASE_MW(drf)))%32))
#define DRF_SHIFTMASK_MW(drf) ((DRF_MASK_MW(drf))<<(DRF_SHIFT_MW(drf)))
#define DRF_SIZE_MW(drf) (DRF_EXTENT_MW(drf)-DRF_BASE_MW(drf)+1)
#define DRF_DEF_MW(d,r,f,c) ((NV##d##r##f##c) << DRF_SHIFT_MW(NV##d##r##f))
#define DRF_NUM_MW(d,r,f,n) (((n)&DRF_MASK_MW(NV##d##r##f))<<DRF_SHIFT_MW(NV##d##r##f))
//
// DRF_VAL_MW is currently the ONLY multi-word macro which supports fields spanning a word boundary.
//
#define DRF_VAL_MW_1WORD(d,r,f,v) ((((v)[DRF_WORD_MW(NV##d##r##f)])>>DRF_SHIFT_MW(NV##d##r##f))&DRF_MASK_MW(NV##d##r##f))
#define DRF_SPANS(drf) ((DRF_PICK_MW(drf,0)/32) != (DRF_PICK_MW(drf,1)/32))
#define DRF_WORD_MW_LOW(drf) (DRF_PICK_MW(drf,0)/32)
#define DRF_WORD_MW_HIGH(drf) (DRF_PICK_MW(drf,1)/32)
#define DRF_MASK_MW_LOW(drf) (0xFFFFFFFFU)
#define DRF_MASK_MW_HIGH(drf) (0xFFFFFFFFU>>(31-(DRF_EXTENT_MW(drf))))
#define DRF_SHIFT_MW_LOW(drf) (DRF_PICK_MW(drf,0)%32)
#define DRF_SHIFT_MW_HIGH(drf) (0)
#define DRF_MERGE_SHIFT(drf) ((32-((DRF_PICK_MW(drf,0)%32)))%32)
#define DRF_VAL_MW_2WORD(d,r,f,v) (((((v)[DRF_WORD_MW_LOW(NV##d##r##f)])>>DRF_SHIFT_MW_LOW(NV##d##r##f))&DRF_MASK_MW_LOW(NV##d##r##f)) | \
(((((v)[DRF_WORD_MW_HIGH(NV##d##r##f)])>>DRF_SHIFT_MW_HIGH(NV##d##r##f))&DRF_MASK_MW_HIGH(NV##d##r##f)) << DRF_MERGE_SHIFT(NV##d##r##f)))
#define DRF_VAL_MW(d,r,f,v) ( DRF_SPANS(NV##d##r##f) ? DRF_VAL_MW_2WORD(d,r,f,v) : DRF_VAL_MW_1WORD(d,r,f,v) )
#define DRF_IDX_DEF_MW(d,r,f,i,c) ((NV##d##r##f##c)<<DRF_SHIFT_MW(NV##d##r##f(i)))
#define DRF_IDX_NUM_MW(d,r,f,i,n) (((n)&DRF_MASK_MW(NV##d##r##f(i)))<<DRF_SHIFT_MW(NV##d##r##f(i)))
#define DRF_IDX_VAL_MW(d,r,f,i,v) ((((v)[DRF_WORD_MW(NV##d##r##f(i))])>>DRF_SHIFT_MW(NV##d##r##f(i)))&DRF_MASK_MW(NV##d##r##f(i)))
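//
// Illustrative spanning read, assuming a hypothetical field defined as
//     #define NV_SPH_A_FOO   MW(35:30)
// i.e. a 6-bit field crossing the word 0 / word 1 boundary:
//
//     NvU32 sph[4];
//     NvU32 foo = DRF_VAL_MW(_SPH, _A, _FOO, sph);
//
// DRF_VAL_MW detects the span and merges bits 30..31 of sph[0] with
// bits 0..3 of sph[1].
//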
//
// Logically OR all DRF_DEF constants indexed from zero to s (semi-inclusive, i.e. over the range [0, s)).
// Caution: Target variable v must be pre-initialized.
//
#define FLD_IDX_OR_DRF_DEF(d,r,f,c,s,v) \
do \
{ NvU32 idx; \
for (idx = 0; idx < (NV ## d ## r ## f ## s); ++idx)\
{ \
v |= DRF_IDX_DEF(d,r,f,idx,c); \
} \
} while(0)
#define FLD_MERGE_MW(drf,n,v) (((v)[DRF_WORD_MW(drf)] & ~DRF_SHIFTMASK_MW(drf)) | n)
#define FLD_ASSIGN_MW(drf,n,v) ((v)[DRF_WORD_MW(drf)] = FLD_MERGE_MW(drf, n, v))
#define FLD_IDX_MERGE_MW(drf,i,n,v) (((v)[DRF_WORD_MW(drf(i))] & ~DRF_SHIFTMASK_MW(drf(i))) | n)
#define FLD_IDX_ASSIGN_MW(drf,i,n,v) ((v)[DRF_WORD_MW(drf(i))] = FLD_MERGE_MW(drf(i), n, v))
#define FLD_SET_DRF_MW(d,r,f,c,v) FLD_MERGE_MW(NV##d##r##f, DRF_DEF_MW(d,r,f,c), v)
#define FLD_SET_DRF_NUM_MW(d,r,f,n,v) FLD_ASSIGN_MW(NV##d##r##f, DRF_NUM_MW(d,r,f,n), v)
#define FLD_SET_DRF_DEF_MW(d,r,f,c,v) FLD_ASSIGN_MW(NV##d##r##f, DRF_DEF_MW(d,r,f,c), v)
#define FLD_IDX_SET_DRF_MW(d,r,f,i,c,v) FLD_IDX_MERGE_MW(NV##d##r##f, i, DRF_IDX_DEF_MW(d,r,f,i,c), v)
#define FLD_IDX_SET_DRF_DEF_MW(d,r,f,i,c,v) FLD_IDX_MERGE_MW(NV##d##r##f, i, DRF_IDX_DEF_MW(d,r,f,i,c), v)
#define FLD_IDX_SET_DRF_NUM_MW(d,r,f,i,n,v) FLD_IDX_ASSIGN_MW(NV##d##r##f, i, DRF_IDX_NUM_MW(d,r,f,i,n), v)
#define FLD_TEST_DRF_MW(d,r,f,c,v) ((DRF_VAL_MW(d, r, f, (v)) == NV##d##r##f##c))
#define FLD_TEST_DRF_NUM_MW(d,r,f,n,v) ((DRF_VAL_MW(d, r, f, (v)) == n))
#define FLD_IDX_TEST_DRF_MW(d,r,f,i,c,v) ((DRF_IDX_VAL_MW(d, r, f, i, (v)) == NV##d##r##f##c))
#define DRF_VAL_BS(d,r,f,v) ( DRF_SPANS(NV##d##r##f) ? DRF_VAL_BS_2WORD(d,r,f,(v)) : DRF_VAL_BS_1WORD(d,r,f,(v)) )
//------------------------------------------------------------------------//
// //
// Common defines for engine register reference wrappers //
// //
// New engine addressing can be created like: //
// \#define ENG_REG_PMC(o,d,r) NV##d##r //
// \#define ENG_IDX_REG_CE(o,d,i,r) CE_MAP(o,r,i) //
// //
// See FB_FBPA* for more examples //
//------------------------------------------------------------------------//
#define ENG_RD_REG(g,o,d,r) GPU_REG_RD32(g, ENG_REG##d(o,d,r))
#define ENG_WR_REG(g,o,d,r,v) GPU_REG_WR32(g, ENG_REG##d(o,d,r), (v))
#define ENG_RD_DRF(g,o,d,r,f) ((GPU_REG_RD32(g, ENG_REG##d(o,d,r))>>GPU_DRF_SHIFT(NV ## d ## r ## f))&GPU_DRF_MASK(NV ## d ## r ## f))
#define ENG_WR_DRF_DEF(g,o,d,r,f,c) GPU_REG_WR32(g, ENG_REG##d(o,d,r),(GPU_REG_RD32(g,ENG_REG##d(o,d,r))&~(GPU_DRF_MASK(NV##d##r##f)<<GPU_DRF_SHIFT(NV##d##r##f)))|GPU_DRF_DEF(d,r,f,c))
#define ENG_WR_DRF_NUM(g,o,d,r,f,n) GPU_REG_WR32(g, ENG_REG##d(o,d,r),(GPU_REG_RD32(g,ENG_REG##d(o,d,r))&~(GPU_DRF_MASK(NV##d##r##f)<<GPU_DRF_SHIFT(NV##d##r##f)))|GPU_DRF_NUM(d,r,f,n))
#define ENG_TEST_DRF_DEF(g,o,d,r,f,c) (ENG_RD_DRF(g, o, d, r, f) == NV##d##r##f##c)
#define ENG_RD_IDX_DRF(g,o,d,r,f,i) ((GPU_REG_RD32(g, ENG_REG##d(o,d,r(i)))>>GPU_DRF_SHIFT(NV ## d ## r ## f))&GPU_DRF_MASK(NV ## d ## r ## f))
#define ENG_TEST_IDX_DRF_DEF(g,o,d,r,f,c,i) (ENG_RD_IDX_DRF(g, o, d, r, f, (i)) == NV##d##r##f##c)
#define ENG_IDX_RD_REG(g,o,d,i,r) GPU_REG_RD32(g, ENG_IDX_REG##d(o,d,i,r))
#define ENG_IDX_WR_REG(g,o,d,i,r,v) GPU_REG_WR32(g, ENG_IDX_REG##d(o,d,i,r), (v))
#define ENG_IDX_RD_DRF(g,o,d,i,r,f) ((GPU_REG_RD32(g, ENG_IDX_REG##d(o,d,i,r))>>GPU_DRF_SHIFT(NV ## d ## r ## f))&GPU_DRF_MASK(NV ## d ## r ## f))
//
// DRF_READ_1WORD_BS() and DRF_READ_1WORD_BS_HIGH() do not read beyond the bytes that contain
// the requested value. Reading beyond the actual data causes a page fault panic when the
// immediately following page happens to be protected or not mapped.
//
#define DRF_VAL_BS_1WORD(d,r,f,v) ((DRF_READ_1WORD_BS(d,r,f,v)>>DRF_SHIFT_MW(NV##d##r##f))&DRF_MASK_MW(NV##d##r##f))
#define DRF_VAL_BS_2WORD(d,r,f,v) (((DRF_READ_4BYTE_BS(NV##d##r##f,v)>>DRF_SHIFT_MW_LOW(NV##d##r##f))&DRF_MASK_MW_LOW(NV##d##r##f)) | \
(((DRF_READ_1WORD_BS_HIGH(d,r,f,v)>>DRF_SHIFT_MW_HIGH(NV##d##r##f))&DRF_MASK_MW_HIGH(NV##d##r##f)) << DRF_MERGE_SHIFT(NV##d##r##f)))
#define DRF_READ_1BYTE_BS(drf,v) ((NvU32)(((const NvU8*)(v))[DRF_WORD_MW(drf)*4]))
#define DRF_READ_2BYTE_BS(drf,v) (DRF_READ_1BYTE_BS(drf,v)| \
((NvU32)(((const NvU8*)(v))[DRF_WORD_MW(drf)*4+1])<<8))
#define DRF_READ_3BYTE_BS(drf,v) (DRF_READ_2BYTE_BS(drf,v)| \
((NvU32)(((const NvU8*)(v))[DRF_WORD_MW(drf)*4+2])<<16))
#define DRF_READ_4BYTE_BS(drf,v) (DRF_READ_3BYTE_BS(drf,v)| \
((NvU32)(((const NvU8*)(v))[DRF_WORD_MW(drf)*4+3])<<24))
#define DRF_READ_1BYTE_BS_HIGH(drf,v) ((NvU32)(((const NvU8*)(v))[DRF_WORD_MW_HIGH(drf)*4]))
#define DRF_READ_2BYTE_BS_HIGH(drf,v) (DRF_READ_1BYTE_BS_HIGH(drf,v)| \
((NvU32)(((const NvU8*)(v))[DRF_WORD_MW_HIGH(drf)*4+1])<<8))
#define DRF_READ_3BYTE_BS_HIGH(drf,v) (DRF_READ_2BYTE_BS_HIGH(drf,v)| \
((NvU32)(((const NvU8*)(v))[DRF_WORD_MW_HIGH(drf)*4+2])<<16))
#define DRF_READ_4BYTE_BS_HIGH(drf,v) (DRF_READ_3BYTE_BS_HIGH(drf,v)| \
((NvU32)(((const NvU8*)(v))[DRF_WORD_MW_HIGH(drf)*4+3])<<24))
// Calculate 2^n - 1 and avoid shift counter overflow.
//
// (Shifting by the full operand width is undefined behavior; e.g. on Windows
// amd64 the shift count is masked, so 1ULL << 64 => 1.)
//
#define NV_TWO_N_MINUS_ONE(n) (((1ULL<<(n/2))<<((n+1)/2))-1)
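// The two-step shift keeps each shift count below the operand width, so e.g.
// NV_TWO_N_MINUS_ONE(64) == 0xFFFFFFFFFFFFFFFF rather than hitting the
// undefined single shift 1ULL << 64.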
#define DRF_READ_1WORD_BS(d,r,f,v) \
((DRF_EXTENT_MW(NV##d##r##f)<8)?DRF_READ_1BYTE_BS(NV##d##r##f,(v)): \
((DRF_EXTENT_MW(NV##d##r##f)<16)?DRF_READ_2BYTE_BS(NV##d##r##f,(v)): \
((DRF_EXTENT_MW(NV##d##r##f)<24)?DRF_READ_3BYTE_BS(NV##d##r##f,(v)): \
DRF_READ_4BYTE_BS(NV##d##r##f,(v)))))
#define DRF_READ_1WORD_BS_HIGH(d,r,f,v) \
((DRF_EXTENT_MW(NV##d##r##f)<8)?DRF_READ_1BYTE_BS_HIGH(NV##d##r##f,(v)): \
((DRF_EXTENT_MW(NV##d##r##f)<16)?DRF_READ_2BYTE_BS_HIGH(NV##d##r##f,(v)): \
((DRF_EXTENT_MW(NV##d##r##f)<24)?DRF_READ_3BYTE_BS_HIGH(NV##d##r##f,(v)): \
DRF_READ_4BYTE_BS_HIGH(NV##d##r##f,(v)))))
#define LOWESTBIT(x) ( (x) & (((x) - 1U) ^ (x)) )
// Destructive operation on n32
#define HIGHESTBIT(n32) \
{ \
HIGHESTBITIDX_32(n32); \
n32 = NVBIT(n32); \
}
#define ONEBITSET(x) ( ((x) != 0U) && (((x) & ((x) - 1U)) == 0U) )
// Destructive operation on n32
#define NUMSETBITS_32(n32) \
{ \
n32 = n32 - ((n32 >> 1) & 0x55555555); \
n32 = (n32 & 0x33333333) + ((n32 >> 2) & 0x33333333); \
n32 = (((n32 + (n32 >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24; \
}
/*!
* Calculate number of bits set in a 32-bit unsigned integer.
* Pure typesafe alternative to @ref NUMSETBITS_32.
*/
static NV_FORCEINLINE NvU32
nvPopCount32(const NvU32 x)
{
NvU32 temp = x;
temp = temp - ((temp >> 1) & 0x55555555U);
temp = (temp & 0x33333333U) + ((temp >> 2) & 0x33333333U);
temp = (((temp + (temp >> 4)) & 0x0F0F0F0FU) * 0x01010101U) >> 24;
return temp;
}
/*!
* Calculate number of bits set in a 64-bit unsigned integer.
*/
static NV_FORCEINLINE NvU32
nvPopCount64(const NvU64 x)
{
NvU64 temp = x;
temp = temp - ((temp >> 1) & 0x5555555555555555ULL);
temp = (temp & 0x3333333333333333ULL) + ((temp >> 2) & 0x3333333333333333ULL);
temp = (temp + (temp >> 4)) & 0x0F0F0F0F0F0F0F0FULL;
temp = (temp * 0x0101010101010101ULL) >> 56;
return (NvU32)temp;
}
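// Illustrative: nvPopCount32(0xCDU) == 5 and nvPopCount64(~0ULL) == 64.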
/*!
* Determine how many bits are set below a bit index within a mask.
* This assigns a dense ordering to the set bits in the mask.
*
* For example the mask 0xCD contains 5 set bits:
* nvMaskPos32(0xCD, 0) == 0
* nvMaskPos32(0xCD, 2) == 1
* nvMaskPos32(0xCD, 3) == 2
* nvMaskPos32(0xCD, 6) == 3
* nvMaskPos32(0xCD, 7) == 4
*/
static NV_FORCEINLINE NvU32
nvMaskPos32(const NvU32 mask, const NvU32 bitIdx)
{
return nvPopCount32(mask & (NVBIT32(bitIdx) - 1U));
}
// Destructive operation on n32
#define LOWESTBITIDX_32(n32) \
{ \
n32 = BIT_IDX_32(LOWESTBIT(n32));\
}
// Destructive operation on n32
#define HIGHESTBITIDX_32(n32) \
{ \
NvU32 count = 0; \
while (n32 >>= 1) \
{ \
count++; \
} \
n32 = count; \
}
// Destructive operation on n32
#define ROUNDUP_POW2(n32) \
{ \
n32--; \
n32 |= n32 >> 1; \
n32 |= n32 >> 2; \
n32 |= n32 >> 4; \
n32 |= n32 >> 8; \
n32 |= n32 >> 16; \
n32++; \
}
/*!
* Round up a 32-bit unsigned integer to the next power of 2.
* Pure typesafe alternative to @ref ROUNDUP_POW2.
*
 * @param[in] x Must be in range [0, 2^31] to avoid overflow.
*/
static NV_FORCEINLINE NvU32
nvNextPow2_U32(const NvU32 x)
{
NvU32 y = x;
y--;
y |= y >> 1;
y |= y >> 2;
y |= y >> 4;
y |= y >> 8;
y |= y >> 16;
y++;
return y;
}
static NV_FORCEINLINE NvU32
nvPrevPow2_U32(const NvU32 x )
{
NvU32 y = x;
y |= (y >> 1);
y |= (y >> 2);
y |= (y >> 4);
y |= (y >> 8);
y |= (y >> 16);
return y - (y >> 1);
}
static NV_FORCEINLINE NvU64
nvPrevPow2_U64(const NvU64 x )
{
NvU64 y = x;
y |= (y >> 1);
y |= (y >> 2);
y |= (y >> 4);
y |= (y >> 8);
y |= (y >> 16);
y |= (y >> 32);
return y - (y >> 1);
}
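// Illustrative values for the power-of-2 helpers:
//     nvNextPow2_U32(5) == 8      nvNextPow2_U32(8) == 8
//     nvPrevPow2_U32(5) == 4      nvPrevPow2_U64(1) == 1
// Note nvNextPow2_U32(0) yields 0, since the bit smearing starts from x - 1.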
// Destructive operation on n64
#define ROUNDUP_POW2_U64(n64) \
{ \
n64--; \
n64 |= n64 >> 1; \
n64 |= n64 >> 2; \
n64 |= n64 >> 4; \
n64 |= n64 >> 8; \
n64 |= n64 >> 16; \
n64 |= n64 >> 32; \
n64++; \
}
#define NV_SWAP_U8(a,b) \
{ \
NvU8 temp; \
temp = a; \
a = b; \
b = temp; \
}
#define NV_SWAP_U32(a,b) \
{ \
NvU32 temp; \
temp = a; \
a = b; \
b = temp; \
}
/*!
* @brief Macros allowing simple iteration over bits set in a given mask.
*
* @param[in] maskWidth bit-width of the mask (allowed: 8, 16, 32, 64)
*
* @param[in,out] index lvalue that is used as a bit index in the loop
* (can be declared as any NvU* or NvS* variable)
* @param[in] mask expression, loop will iterate over set bits only
*/
#define FOR_EACH_INDEX_IN_MASK(maskWidth,index,mask) \
{ \
NvU##maskWidth lclMsk = (NvU##maskWidth)(mask); \
for ((index) = 0U; lclMsk != 0U; (index)++, lclMsk >>= 1U)\
{ \
if (((NvU##maskWidth)NVBIT64(0) & lclMsk) == 0U) \
{ \
continue; \
}
#define FOR_EACH_INDEX_IN_MASK_END \
} \
}
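/*
 * Illustrative use of the iteration macros above:
 *
 *     NvU32 i;
 *     FOR_EACH_INDEX_IN_MASK(32, i, 0xCDU)
 *     {
 *         // loop body runs with i = 0, 2, 3, 6, 7
 *     }
 *     FOR_EACH_INDEX_IN_MASK_END;
 */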
//
// Size to use when declaring variable-sized arrays
//
#define NV_ANYSIZE_ARRAY 1
//
// Returns ceil(a/b)
//
#define NV_CEIL(a,b) (((a)+(b)-1)/(b))
// Clearer name for NV_CEIL
#ifndef NV_DIV_AND_CEIL
#define NV_DIV_AND_CEIL(a, b) NV_CEIL(a,b)
#endif
#ifndef NV_MIN
#define NV_MIN(a, b) (((a) < (b)) ? (a) : (b))
#endif
#ifndef NV_MAX
#define NV_MAX(a, b) (((a) > (b)) ? (a) : (b))
#endif
//
// Returns absolute value of provided integer expression
//
#define NV_ABS(a) ((a)>=0?(a):(-(a)))
//
// Returns 1 if the input number is positive, 0 if it is zero, and -1 if it is
// negative. Avoid passing a function call as the macro parameter: it is
// evaluated more than once, so any side effects will repeat.
//
#define NV_SIGN(s) ((NvS8)(((s) > 0) - ((s) < 0)))
//
// Returns 1 if the input number is >= 0, or -1 otherwise. This treats 0 as
// having a positive sign.
//
#define NV_ZERO_SIGN(s) ((NvS8)((((s) >= 0) * 2) - 1))
// Returns the offset (in bytes) of 'member' in struct 'type'.
#ifndef NV_OFFSETOF
#if defined(__GNUC__) && (__GNUC__ > 3)
#define NV_OFFSETOF(type, member) ((NvU32)__builtin_offsetof(type, member))
#else
#define NV_OFFSETOF(type, member) ((NvU32)(NvU64)&(((type *)0)->member)) // shouldn't we use PtrToUlong? But will need to include windows header.
#endif
#endif
//
// Performs a rounded division of b into a (unsigned). For SIGNED version of
// NV_ROUNDED_DIV() macro check the comments in bug 769777.
//
#define NV_UNSIGNED_ROUNDED_DIV(a,b) (((a) + ((b) / 2U)) / (b))
/*!
* Performs a ceiling division of b into a (unsigned). A "ceiling" division is
 * one that rounds the result up if a % b != 0.
*
* @param[in] a Numerator
* @param[in] b Denominator
*
 * @return a / b + ((a % b != 0) ? 1 : 0)
*/
#define NV_UNSIGNED_DIV_CEIL(a, b) (((a) + ((b) - 1)) / (b))
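// Illustrative: NV_UNSIGNED_ROUNDED_DIV(7U, 2U) == 4 (rounds to nearest),
// while NV_UNSIGNED_DIV_CEIL(7U, 2U) == 4 and NV_UNSIGNED_DIV_CEIL(6U, 2U) == 3.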
/*!
* Performs subtraction where a negative difference is raised to zero.
* Can be used to avoid underflowing an unsigned subtraction.
*
* @param[in] a Minuend
* @param[in] b Subtrahend
*
* @return a > b ? a - b : 0.
*/
#define NV_SUBTRACT_NO_UNDERFLOW(a, b) ((a)>(b) ? (a)-(b) : 0)
/*!
* Performs a rounded right-shift of 32-bit unsigned value "a" by "shift" bits.
* Will round result away from zero.
*
* @param[in] a 32-bit unsigned value to shift.
* @param[in] shift Number of bits by which to shift.
*
* @return Resulting shifted value rounded away from zero.
*/
#define NV_RIGHT_SHIFT_ROUNDED(a, shift) \
(((a) >> (shift)) + !!((NVBIT((shift) - 1) & (a)) == NVBIT((shift) - 1)))
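// Illustrative: NV_RIGHT_SHIFT_ROUNDED(5U, 1) == 3 (5/2 rounds away from
// zero), while NV_RIGHT_SHIFT_ROUNDED(4U, 1) == 2.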
//
// Power of 2 alignment.
// (Will give unexpected results if 'gran' is not a power of 2.)
//
#ifndef NV_ALIGN_DOWN
//
// Notably using v - v + gran ensures gran gets promoted to the same type as v if gran has a smaller type.
// Otherwise, if aligning an NvU64 with NvU32 granularity, the top 4 bytes get zeroed.
//
#define NV_ALIGN_DOWN(v, gran) ((v) & ~((v) - (v) + (gran) - 1))
#endif
#ifndef NV_ALIGN_UP
//
// Notably using v - v + gran ensures gran gets promoted to the same type as v if gran has a smaller type.
// Otherwise, if aligning an NvU64 with NvU32 granularity, the top 4 bytes get zeroed.
//
#define NV_ALIGN_UP(v, gran) (((v) + ((gran) - 1)) & ~((v) - (v) + (gran) - 1))
#endif
#ifndef NV_ALIGN_DOWN64
#define NV_ALIGN_DOWN64(v, gran) ((v) & ~(((NvU64)(gran)) - 1))
#endif
#ifndef NV_ALIGN_UP64
#define NV_ALIGN_UP64(v, gran) (((v) + ((gran) - 1)) & ~(((NvU64)(gran)) - 1))
#endif
#ifndef NV_IS_ALIGNED
#define NV_IS_ALIGNED(v, gran) (0U == ((v) & ((gran) - 1U)))
#endif
#ifndef NV_IS_ALIGNED64
#define NV_IS_ALIGNED64(v, gran) (0U == ((v) & (((NvU64)(gran)) - 1U)))
#endif
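// Illustrative alignment results (gran must be a power of 2):
//     NV_ALIGN_UP(0x1001U, 0x1000U)   == 0x2000
//     NV_ALIGN_DOWN(0x1FFFU, 0x1000U) == 0x1000
//     NV_IS_ALIGNED(0x3000U, 0x1000U) holds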
#ifndef NVMISC_MEMSET
static NV_FORCEINLINE void *NVMISC_MEMSET(void *s, NvU8 c, NvLength n)
{
NvU8 *b = (NvU8 *) s;
NvLength i;
for (i = 0; i < n; i++)
{
b[i] = c;
}
return s;
}
#endif
#ifndef NVMISC_MEMCPY
static NV_FORCEINLINE void *NVMISC_MEMCPY(void *dest, const void *src, NvLength n)
{
NvU8 *destByte = (NvU8 *) dest;
const NvU8 *srcByte = (const NvU8 *) src;
NvLength i;
for (i = 0; i < n; i++)
{
destByte[i] = srcByte[i];
}
return dest;
}
#endif
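// Like strncpy: copies at most n characters, stopping after a terminating
// NUL, and NUL-pads the remainder of dest. As with strncpy, dest is not
// NUL-terminated if src is at least n characters long.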
static NV_FORCEINLINE char *NVMISC_STRNCPY(char *dest, const char *src, NvLength n)
{
NvLength i;
for (i = 0; i < n; i++)
{
dest[i] = src[i];
if (src[i] == '\0')
{
break;
}
}
for (; i < n; i++)
{
dest[i] = '\0';
}
return dest;
}
/*!
* Convert a void* to an NvUPtr. This is used when MISRA forbids us from doing a direct cast.
*
* @param[in] ptr Pointer to be converted
*
* @return Resulting NvUPtr
*/
static NV_FORCEINLINE NvUPtr NV_PTR_TO_NVUPTR(void *ptr)
{
union
{
NvUPtr v;
void *p;
} uAddr;
uAddr.p = ptr;
return uAddr.v;
}
/*!
* Convert an NvUPtr to a void*. This is used when MISRA forbids us from doing a direct cast.
*
 * @param[in] address Address to be converted
*
* @return Resulting void *
*/
static NV_FORCEINLINE void *NV_NVUPTR_TO_PTR(NvUPtr address)
{
union
{
NvUPtr v;
void *p;
} uAddr;
uAddr.v = address;
return uAddr.p;
}
#ifdef __cplusplus
}
#endif //__cplusplus
#endif // __NV_MISC_H

View File

@@ -0,0 +1,130 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2014-2019 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef SDK_NVSTATUS_H
#define SDK_NVSTATUS_H
/* XAPIGEN - this file is not suitable for (nor needed by) xapigen. */
/* Rather than #ifdef out every such include in every sdk */
/* file, punt here. */
#if !defined(XAPIGEN) /* rest of file */
#ifdef __cplusplus
extern "C" {
#endif
#include "nvtypes.h"
typedef NvU32 NV_STATUS;
#define NV_STATUS_CODE( name, code, string ) name = (code),
enum
{
#include "nvstatuscodes.h"
};
#undef NV_STATUS_CODE
/*!
* @def NV_STATUS_LEVEL_OK
* @see NV_STATUS_LEVEL
* @brief Success: No error or special condition
*/
#define NV_STATUS_LEVEL_OK 0
/*!
* @def NV_STATUS_LEVEL_WARN
* @see NV_STATUS_LEVEL
 * @brief Success, but there is a special condition
*
* @details In general, NV_STATUS_LEVEL_WARN status codes are handled the
 * same as NV_STATUS_LEVEL_OK, but are useful to indicate that
 * there is a condition that may require special handling.
 *
 * Therefore, in most cases, client functions should test for
 * status <= NV_STATUS_LEVEL_WARN or status > NV_STATUS_LEVEL_WARN
 * to determine success vs. failure of a call.
*/
#define NV_STATUS_LEVEL_WARN 1
/*!
* @def NV_STATUS_LEVEL_ERR
* @see NV_STATUS_LEVEL
* @brief Unrecoverable error condition
*/
#define NV_STATUS_LEVEL_ERR 3
/*!
* @def NV_STATUS_LEVEL
* @see NV_STATUS_LEVEL_OK
* @see NV_STATUS_LEVEL_WARN
* @see NV_STATUS_LEVEL_ERR
* @brief Level of the status code
*
* @warning IMPORTANT: When comparing NV_STATUS_LEVEL(_S) against one of
* these constants, it is important to use '<=' or '>' (rather
* than '<' or '>=').
*
 * For example, do:
* if (NV_STATUS_LEVEL(status) <= NV_STATUS_LEVEL_WARN)
* rather than:
* if (NV_STATUS_LEVEL(status) < NV_STATUS_LEVEL_ERR)
*
* By being consistent in this manner, it is easier to systematically
* add additional level constants. New levels are likely to lower
* (rather than raise) the severity of _ERR codes. For example,
* if we were to add NV_STATUS_LEVEL_RETRY to indicate hardware
* failures that may be recoverable (e.g. RM_ERR_TIMEOUT_RETRY
* or RM_ERR_BUSY_RETRY), it would be less severe than
 * NV_STATUS_LEVEL_ERR, the level to which these status codes now
* belong. Using '<=' and '>' ensures your code is not broken in
* cases like this.
*/
#define NV_STATUS_LEVEL(_S) \
((_S) == NV_OK? NV_STATUS_LEVEL_OK: \
((_S) != NV_ERR_GENERIC && (_S) & 0x00010000? NV_STATUS_LEVEL_WARN: \
NV_STATUS_LEVEL_ERR))
/*!
 * @def NV_STATUS_LEVEL_CHAR
* @see NV_STATUS_LEVEL_OK
* @see NV_STATUS_LEVEL_WARN
* @see NV_STATUS_LEVEL_ERR
* @brief Character representing status code level
*/
#define NV_STATUS_LEVEL_CHAR(_S) \
((_S) == NV_OK? '0': \
((_S) != NV_ERR_GENERIC && (_S) & 0x00010000? 'W': \
'E'))
// Function definitions
const char *nvstatusToString(NV_STATUS nvStatusIn);
#ifdef __cplusplus
}
#endif
#endif // XAPIGEN
#endif /* SDK_NVSTATUS_H */

View File

@@ -0,0 +1,169 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2014-2020 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef SDK_NVSTATUSCODES_H
#define SDK_NVSTATUSCODES_H
/* XAPIGEN - this file is not suitable for (nor needed by) xapigen. */
/* Rather than #ifdef out every such include in every sdk */
/* file, punt here. */
#if !defined(XAPIGEN) /* rest of file */
NV_STATUS_CODE(NV_OK, 0x00000000, "Success")
NV_STATUS_CODE(NV_ERR_GENERIC, 0x0000FFFF, "Failure: Generic Error")
NV_STATUS_CODE(NV_ERR_BROKEN_FB, 0x00000001, "Frame-Buffer broken")
NV_STATUS_CODE(NV_ERR_BUFFER_TOO_SMALL, 0x00000002, "Buffer passed in is too small")
NV_STATUS_CODE(NV_ERR_BUSY_RETRY, 0x00000003, "System is busy, retry later")
NV_STATUS_CODE(NV_ERR_CALLBACK_NOT_SCHEDULED, 0x00000004, "The requested callback API not scheduled")
NV_STATUS_CODE(NV_ERR_CARD_NOT_PRESENT, 0x00000005, "Card not detected")
NV_STATUS_CODE(NV_ERR_CYCLE_DETECTED, 0x00000006, "Call cycle detected")
NV_STATUS_CODE(NV_ERR_DMA_IN_USE, 0x00000007, "Requested DMA is in use")
NV_STATUS_CODE(NV_ERR_DMA_MEM_NOT_LOCKED, 0x00000008, "Requested DMA memory is not locked")
NV_STATUS_CODE(NV_ERR_DMA_MEM_NOT_UNLOCKED, 0x00000009, "Requested DMA memory is not unlocked")
NV_STATUS_CODE(NV_ERR_DUAL_LINK_INUSE, 0x0000000A, "Dual-Link is in use")
NV_STATUS_CODE(NV_ERR_ECC_ERROR, 0x0000000B, "Generic ECC error")
NV_STATUS_CODE(NV_ERR_FIFO_BAD_ACCESS, 0x0000000C, "FIFO: Invalid access")
NV_STATUS_CODE(NV_ERR_FREQ_NOT_SUPPORTED, 0x0000000D, "Requested frequency is not supported")
NV_STATUS_CODE(NV_ERR_GPU_DMA_NOT_INITIALIZED, 0x0000000E, "Requested DMA not initialized")
NV_STATUS_CODE(NV_ERR_GPU_IS_LOST, 0x0000000F, "GPU lost from the bus")
NV_STATUS_CODE(NV_ERR_GPU_IN_FULLCHIP_RESET, 0x00000010, "GPU currently in full-chip reset")
NV_STATUS_CODE(NV_ERR_GPU_NOT_FULL_POWER, 0x00000011, "GPU not in full power")
NV_STATUS_CODE(NV_ERR_GPU_UUID_NOT_FOUND, 0x00000012, "GPU UUID not found")
NV_STATUS_CODE(NV_ERR_HOT_SWITCH, 0x00000013, "System in hot switch")
NV_STATUS_CODE(NV_ERR_I2C_ERROR, 0x00000014, "I2C Error")
NV_STATUS_CODE(NV_ERR_I2C_SPEED_TOO_HIGH, 0x00000015, "I2C Error: Speed too high")
NV_STATUS_CODE(NV_ERR_ILLEGAL_ACTION, 0x00000016, "Current action is not allowed")
NV_STATUS_CODE(NV_ERR_IN_USE, 0x00000017, "Generic busy error")
NV_STATUS_CODE(NV_ERR_INFLATE_COMPRESSED_DATA_FAILED, 0x00000018, "Failed to inflate compressed data")
NV_STATUS_CODE(NV_ERR_INSERT_DUPLICATE_NAME, 0x00000019, "Found a duplicate entry in the requested btree")
NV_STATUS_CODE(NV_ERR_INSUFFICIENT_RESOURCES, 0x0000001A, "Ran out of a critical resource, other than memory")
NV_STATUS_CODE(NV_ERR_INSUFFICIENT_PERMISSIONS, 0x0000001B, "The requester does not have sufficient permissions")
NV_STATUS_CODE(NV_ERR_INSUFFICIENT_POWER, 0x0000001C, "Generic Error: Low power")
NV_STATUS_CODE(NV_ERR_INVALID_ACCESS_TYPE, 0x0000001D, "This type of access is not allowed")
NV_STATUS_CODE(NV_ERR_INVALID_ADDRESS, 0x0000001E, "Address not valid")
NV_STATUS_CODE(NV_ERR_INVALID_ARGUMENT, 0x0000001F, "Invalid argument to call")
NV_STATUS_CODE(NV_ERR_INVALID_BASE, 0x00000020, "Invalid base")
NV_STATUS_CODE(NV_ERR_INVALID_CHANNEL, 0x00000021, "Given channel-id not valid")
NV_STATUS_CODE(NV_ERR_INVALID_CLASS, 0x00000022, "Given class-id not valid")
NV_STATUS_CODE(NV_ERR_INVALID_CLIENT, 0x00000023, "Given client not valid")
NV_STATUS_CODE(NV_ERR_INVALID_COMMAND, 0x00000024, "Command passed is not valid")
NV_STATUS_CODE(NV_ERR_INVALID_DATA, 0x00000025, "Invalid data passed")
NV_STATUS_CODE(NV_ERR_INVALID_DEVICE, 0x00000026, "Current device is not valid")
NV_STATUS_CODE(NV_ERR_INVALID_DMA_SPECIFIER, 0x00000027, "The requested DMA specifier is not valid")
NV_STATUS_CODE(NV_ERR_INVALID_EVENT, 0x00000028, "Invalid event occurred")
NV_STATUS_CODE(NV_ERR_INVALID_FLAGS, 0x00000029, "Invalid flags passed")
NV_STATUS_CODE(NV_ERR_INVALID_FUNCTION, 0x0000002A, "Called function is not valid")
NV_STATUS_CODE(NV_ERR_INVALID_HEAP, 0x0000002B, "Heap corrupted")
NV_STATUS_CODE(NV_ERR_INVALID_INDEX, 0x0000002C, "Index invalid")
NV_STATUS_CODE(NV_ERR_INVALID_IRQ_LEVEL, 0x0000002D, "Requested IRQ level is not valid")
NV_STATUS_CODE(NV_ERR_INVALID_LIMIT, 0x0000002E, "Generic Error: Invalid limit")
NV_STATUS_CODE(NV_ERR_INVALID_LOCK_STATE, 0x0000002F, "Requested lock state not valid")
NV_STATUS_CODE(NV_ERR_INVALID_METHOD, 0x00000030, "Requested method not valid")
NV_STATUS_CODE(NV_ERR_INVALID_OBJECT, 0x00000031, "Object not valid")
NV_STATUS_CODE(NV_ERR_INVALID_OBJECT_BUFFER, 0x00000032, "Object buffer passed is not valid")
NV_STATUS_CODE(NV_ERR_INVALID_OBJECT_HANDLE, 0x00000033, "Object handle is not valid")
NV_STATUS_CODE(NV_ERR_INVALID_OBJECT_NEW, 0x00000034, "New object is not valid")
NV_STATUS_CODE(NV_ERR_INVALID_OBJECT_OLD, 0x00000035, "Old object is not valid")
NV_STATUS_CODE(NV_ERR_INVALID_OBJECT_PARENT, 0x00000036, "Object parent is not valid")
NV_STATUS_CODE(NV_ERR_INVALID_OFFSET, 0x00000037, "The offset passed is not valid")
NV_STATUS_CODE(NV_ERR_INVALID_OPERATION, 0x00000038, "Requested operation is not valid")
NV_STATUS_CODE(NV_ERR_INVALID_OWNER, 0x00000039, "Owner not valid")
NV_STATUS_CODE(NV_ERR_INVALID_PARAM_STRUCT, 0x0000003A, "Invalid structure parameter")
NV_STATUS_CODE(NV_ERR_INVALID_PARAMETER, 0x0000003B, "At least one of the parameters passed is not valid")
NV_STATUS_CODE(NV_ERR_INVALID_PATH, 0x0000003C, "The requested path is not valid")
NV_STATUS_CODE(NV_ERR_INVALID_POINTER, 0x0000003D, "Pointer not valid")
NV_STATUS_CODE(NV_ERR_INVALID_REGISTRY_KEY, 0x0000003E, "Found an invalid registry key")
NV_STATUS_CODE(NV_ERR_INVALID_REQUEST, 0x0000003F, "Generic Error: Invalid request")
NV_STATUS_CODE(NV_ERR_INVALID_STATE, 0x00000040, "Generic Error: Invalid state")
NV_STATUS_CODE(NV_ERR_INVALID_STRING_LENGTH, 0x00000041, "The string length is not valid")
NV_STATUS_CODE(NV_ERR_INVALID_READ, 0x00000042, "The requested read operation is not valid")
NV_STATUS_CODE(NV_ERR_INVALID_WRITE, 0x00000043, "The requested write operation is not valid")
NV_STATUS_CODE(NV_ERR_INVALID_XLATE, 0x00000044, "The requested translate operation is not valid")
NV_STATUS_CODE(NV_ERR_IRQ_NOT_FIRING, 0x00000045, "Requested IRQ is not firing")
NV_STATUS_CODE(NV_ERR_IRQ_EDGE_TRIGGERED, 0x00000046, "IRQ is edge triggered")
NV_STATUS_CODE(NV_ERR_MEMORY_TRAINING_FAILED, 0x00000047, "Failed memory training sequence")
NV_STATUS_CODE(NV_ERR_MISMATCHED_SLAVE, 0x00000048, "Slave mismatch")
NV_STATUS_CODE(NV_ERR_MISMATCHED_TARGET, 0x00000049, "Target mismatch")
NV_STATUS_CODE(NV_ERR_MISSING_TABLE_ENTRY, 0x0000004A, "Requested entry not found in the table")
NV_STATUS_CODE(NV_ERR_MODULE_LOAD_FAILED, 0x0000004B, "Failed to load the requested module")
NV_STATUS_CODE(NV_ERR_MORE_DATA_AVAILABLE, 0x0000004C, "There is more data available")
NV_STATUS_CODE(NV_ERR_MORE_PROCESSING_REQUIRED, 0x0000004D, "More processing required for the given call")
NV_STATUS_CODE(NV_ERR_MULTIPLE_MEMORY_TYPES, 0x0000004E, "Multiple memory types found")
NV_STATUS_CODE(NV_ERR_NO_FREE_FIFOS, 0x0000004F, "No more free FIFOs found")
NV_STATUS_CODE(NV_ERR_NO_INTR_PENDING, 0x00000050, "No interrupt pending")
NV_STATUS_CODE(NV_ERR_NO_MEMORY, 0x00000051, "Out of memory")
NV_STATUS_CODE(NV_ERR_NO_SUCH_DOMAIN, 0x00000052, "Requested domain does not exist")
NV_STATUS_CODE(NV_ERR_NO_VALID_PATH, 0x00000053, "Caller did not specify a valid path")
NV_STATUS_CODE(NV_ERR_NOT_COMPATIBLE, 0x00000054, "Generic Error: Incompatible types")
NV_STATUS_CODE(NV_ERR_NOT_READY, 0x00000055, "Generic Error: Not ready")
NV_STATUS_CODE(NV_ERR_NOT_SUPPORTED, 0x00000056, "Call not supported")
NV_STATUS_CODE(NV_ERR_OBJECT_NOT_FOUND, 0x00000057, "Requested object not found")
NV_STATUS_CODE(NV_ERR_OBJECT_TYPE_MISMATCH, 0x00000058, "Specified objects do not match")
NV_STATUS_CODE(NV_ERR_OPERATING_SYSTEM, 0x00000059, "Generic operating system error")
NV_STATUS_CODE(NV_ERR_OTHER_DEVICE_FOUND, 0x0000005A, "Found other device instead of the requested one")
NV_STATUS_CODE(NV_ERR_OUT_OF_RANGE, 0x0000005B, "The specified value is out of bounds")
NV_STATUS_CODE(NV_ERR_OVERLAPPING_UVM_COMMIT, 0x0000005C, "Overlapping unified virtual memory commit")
NV_STATUS_CODE(NV_ERR_PAGE_TABLE_NOT_AVAIL, 0x0000005D, "Requested page table not available")
NV_STATUS_CODE(NV_ERR_PID_NOT_FOUND, 0x0000005E, "Process-Id not found")
NV_STATUS_CODE(NV_ERR_PROTECTION_FAULT, 0x0000005F, "Protection fault")
NV_STATUS_CODE(NV_ERR_RC_ERROR, 0x00000060, "Generic RC error")
NV_STATUS_CODE(NV_ERR_REJECTED_VBIOS, 0x00000061, "Given Video BIOS rejected/invalid")
NV_STATUS_CODE(NV_ERR_RESET_REQUIRED, 0x00000062, "Reset required")
NV_STATUS_CODE(NV_ERR_STATE_IN_USE, 0x00000063, "State in use")
NV_STATUS_CODE(NV_ERR_SIGNAL_PENDING, 0x00000064, "Signal pending")
NV_STATUS_CODE(NV_ERR_TIMEOUT, 0x00000065, "Call timed out")
NV_STATUS_CODE(NV_ERR_TIMEOUT_RETRY, 0x00000066, "Call timed out, please retry later")
NV_STATUS_CODE(NV_ERR_TOO_MANY_PRIMARIES, 0x00000067, "Too many primaries")
NV_STATUS_CODE(NV_ERR_UVM_ADDRESS_IN_USE, 0x00000068, "Unified virtual memory requested address already in use")
NV_STATUS_CODE(NV_ERR_MAX_SESSION_LIMIT_REACHED, 0x00000069, "Maximum number of sessions reached")
NV_STATUS_CODE(NV_ERR_LIB_RM_VERSION_MISMATCH, 0x0000006A, "Library version doesn't match driver version") //Contained within the RMAPI library
NV_STATUS_CODE(NV_ERR_PRIV_SEC_VIOLATION, 0x0000006B, "Priv security violation")
NV_STATUS_CODE(NV_ERR_GPU_IN_DEBUG_MODE, 0x0000006C, "GPU currently in debug mode")
NV_STATUS_CODE(NV_ERR_FEATURE_NOT_ENABLED, 0x0000006D, "Requested Feature functionality is not enabled")
NV_STATUS_CODE(NV_ERR_RESOURCE_LOST, 0x0000006E, "Requested resource has been destroyed")
NV_STATUS_CODE(NV_ERR_PMU_NOT_READY, 0x0000006F, "PMU is not ready or has not yet been initialized")
NV_STATUS_CODE(NV_ERR_FLCN_ERROR, 0x00000070, "Generic falcon assert or halt")
NV_STATUS_CODE(NV_ERR_FATAL_ERROR, 0x00000071, "Fatal/unrecoverable error")
NV_STATUS_CODE(NV_ERR_MEMORY_ERROR, 0x00000072, "Generic memory error")
NV_STATUS_CODE(NV_ERR_INVALID_LICENSE, 0x00000073, "License provided is rejected or invalid")
NV_STATUS_CODE(NV_ERR_NVLINK_INIT_ERROR, 0x00000074, "Nvlink Init Error")
NV_STATUS_CODE(NV_ERR_NVLINK_MINION_ERROR, 0x00000075, "Nvlink Minion Error")
NV_STATUS_CODE(NV_ERR_NVLINK_CLOCK_ERROR, 0x00000076, "Nvlink Clock Error")
NV_STATUS_CODE(NV_ERR_NVLINK_TRAINING_ERROR, 0x00000077, "Nvlink Training Error")
NV_STATUS_CODE(NV_ERR_NVLINK_CONFIGURATION_ERROR, 0x00000078, "Nvlink Configuration Error")
NV_STATUS_CODE(NV_ERR_RISCV_ERROR, 0x00000079, "Generic RISC-V assert or halt")
// Warnings:
NV_STATUS_CODE(NV_WARN_HOT_SWITCH, 0x00010001, "WARNING Hot switch")
NV_STATUS_CODE(NV_WARN_INCORRECT_PERFMON_DATA, 0x00010002, "WARNING Incorrect performance monitor data")
NV_STATUS_CODE(NV_WARN_MISMATCHED_SLAVE, 0x00010003, "WARNING Slave mismatch")
NV_STATUS_CODE(NV_WARN_MISMATCHED_TARGET, 0x00010004, "WARNING Target mismatch")
NV_STATUS_CODE(NV_WARN_MORE_PROCESSING_REQUIRED, 0x00010005, "WARNING More processing required for the call")
NV_STATUS_CODE(NV_WARN_NOTHING_TO_DO, 0x00010006, "WARNING Nothing to do")
NV_STATUS_CODE(NV_WARN_NULL_OBJECT, 0x00010007, "WARNING NULL object found")
NV_STATUS_CODE(NV_WARN_OUT_OF_RANGE, 0x00010008, "WARNING value out of range")
#endif // XAPIGEN
#endif /* SDK_NVSTATUSCODES_H */

View File

@@ -0,0 +1,662 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 1993-2020 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef NVTYPES_INCLUDED
#define NVTYPES_INCLUDED
#ifdef __cplusplus
extern "C" {
#endif
#include "cpuopsys.h"
#ifndef NVTYPES_USE_STDINT
#define NVTYPES_USE_STDINT 0
#endif
#if NVTYPES_USE_STDINT
#ifdef __cplusplus
#include <cstdint>
#include <cinttypes>
#else
#include <stdint.h>
#include <inttypes.h>
#endif // __cplusplus
#endif // NVTYPES_USE_STDINT
#ifndef __cplusplus
// Header includes to make sure wchar_t is defined for C-file compilation
// (C++ is not affected as it is a fundamental type there)
// The _MSC_VER check is a hack to avoid failures in old UEFI build setups,
// which are currently set to msvc100 but do not properly set the include paths
#if defined(NV_WINDOWS) && (!defined(_MSC_VER) || (_MSC_VER > 1600))
#include <stddef.h>
#define NV_HAS_WCHAR_T_TYPEDEF 1
#endif
#endif // __cplusplus
#if defined(MAKE_NV64TYPES_8BYTES_ALIGNED) && defined(__i386__)
// ensure or force 8-bytes alignment of NV 64-bit types
#define OPTIONAL_ALIGN8_ATTR __attribute__((aligned(8)))
#else
// nothing needed
#define OPTIONAL_ALIGN8_ATTR
#endif // MAKE_NV64TYPES_8BYTES_ALIGNED && i386
/***************************************************************************\
|* Typedefs *|
\***************************************************************************/
#ifdef NV_MISRA_COMPLIANCE_REQUIRED
//Typedefs for MISRA COMPLIANCE
typedef unsigned long long UInt64;
typedef signed long long Int64;
typedef unsigned int UInt32;
typedef signed int Int32;
typedef unsigned short UInt16;
typedef signed short Int16;
typedef unsigned char UInt8 ;
typedef signed char Int8 ;
typedef void Void;
typedef float float32_t;
typedef double float64_t;
#endif
// Floating point types
#ifdef NV_MISRA_COMPLIANCE_REQUIRED
typedef float32_t NvF32; /* IEEE Single Precision (S1E8M23) */
typedef float64_t NvF64 OPTIONAL_ALIGN8_ATTR; /* IEEE Double Precision (S1E11M52) */
#else
typedef float NvF32; /* IEEE Single Precision (S1E8M23) */
typedef double NvF64 OPTIONAL_ALIGN8_ATTR; /* IEEE Double Precision (S1E11M52) */
#endif
// 8-bit: 'char' is the only 8-bit in the C89 standard and after.
#if NVTYPES_USE_STDINT
typedef uint8_t NvV8; /* "void": enumerated or multiple fields */
typedef uint8_t NvU8; /* 0 to 255 */
typedef int8_t NvS8; /* -128 to 127 */
#else
#ifdef NV_MISRA_COMPLIANCE_REQUIRED
typedef UInt8 NvV8; /* "void": enumerated or multiple fields */
typedef UInt8 NvU8; /* 0 to 255 */
typedef Int8 NvS8; /* -128 to 127 */
#else
typedef unsigned char NvV8; /* "void": enumerated or multiple fields */
typedef unsigned char NvU8; /* 0 to 255 */
typedef signed char NvS8; /* -128 to 127 */
#endif
#endif // NVTYPES_USE_STDINT
#if NVTYPES_USE_STDINT
typedef uint16_t NvV16; /* "void": enumerated or multiple fields */
typedef uint16_t NvU16; /* 0 to 65535 */
typedef int16_t NvS16; /* -32768 to 32767 */
#else
// 16-bit: If the compiler tells us what we can use, then use it.
#ifdef __INT16_TYPE__
typedef unsigned __INT16_TYPE__ NvV16; /* "void": enumerated or multiple fields */
typedef unsigned __INT16_TYPE__ NvU16; /* 0 to 65535 */
typedef signed __INT16_TYPE__ NvS16; /* -32768 to 32767 */
// The minimal standard for C89 and after
#else // __INT16_TYPE__
#ifdef NV_MISRA_COMPLIANCE_REQUIRED
typedef UInt16 NvV16; /* "void": enumerated or multiple fields */
typedef UInt16 NvU16; /* 0 to 65535 */
typedef Int16 NvS16; /* -32768 to 32767 */
#else
typedef unsigned short NvV16; /* "void": enumerated or multiple fields */
typedef unsigned short NvU16; /* 0 to 65535 */
typedef signed short NvS16; /* -32768 to 32767 */
#endif
#endif // __INT16_TYPE__
#endif // NVTYPES_USE_STDINT
// wchar type (fixed size types consistent across Linux/Windows boundaries)
#if defined(NV_HAS_WCHAR_T_TYPEDEF)
typedef wchar_t NvWchar;
#else
typedef NvV16 NvWchar;
#endif
// Macro to build an NvU32 from four bytes, listed from msb to lsb
#define NvU32_BUILD(a, b, c, d) (((a) << 24) | ((b) << 16) | ((c) << 8) | (d))
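// Illustrative: NvU32_BUILD(0xDEU, 0xADU, 0xBEU, 0xEFU) == 0xDEADBEEF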
#if NVTYPES_USE_STDINT
typedef uint32_t NvV32; /* "void": enumerated or multiple fields */
typedef uint32_t NvU32; /* 0 to 4294967295 */
typedef int32_t NvS32; /* -2147483648 to 2147483647 */
#else
// 32-bit: If the compiler tells us what we can use, then use it.
#ifdef __INT32_TYPE__
typedef unsigned __INT32_TYPE__ NvV32; /* "void": enumerated or multiple fields */
typedef unsigned __INT32_TYPE__ NvU32; /* 0 to 4294967295 */
typedef signed __INT32_TYPE__ NvS32; /* -2147483648 to 2147483647 */
// Older compilers
#else // __INT32_TYPE__
// For historical reasons, NvU32/NvV32 are defined to different base intrinsic
// types than NvS32 on some platforms.
// Mainly for 64-bit linux (where long is 64 bits) and win9x (where int is 16 bits).
#if (defined(NV_UNIX) || defined(vxworks) || defined(NV_WINDOWS_CE) || \
defined(__arm) || defined(__IAR_SYSTEMS_ICC__) || defined(NV_QNX) || \
defined(NV_INTEGRITY) || defined(NV_MODS) || \
defined(__GNUC__) || defined(__clang__) || defined(NV_MACINTOSH_64)) && \
(!defined(NV_MACINTOSH) || defined(NV_MACINTOSH_64))
#ifdef NV_MISRA_COMPLIANCE_REQUIRED
typedef UInt32 NvV32; /* "void": enumerated or multiple fields */
typedef UInt32 NvU32; /* 0 to 4294967295 */
#else
typedef unsigned int NvV32; /* "void": enumerated or multiple fields */
typedef unsigned int NvU32; /* 0 to 4294967295 */
#endif
// The minimal standard for C89 and after
#else // (defined(NV_UNIX) || defined(vxworks) || ...
typedef unsigned long NvV32; /* "void": enumerated or multiple fields */
typedef unsigned long NvU32; /* 0 to 4294967295 */
#endif // (defined(NV_UNIX) || defined(vxworks) || ...
// Mac OS 32-bit still needs this
#if defined(NV_MACINTOSH) && !defined(NV_MACINTOSH_64)
typedef signed long NvS32; /* -2147483648 to 2147483647 */
#else
#ifdef NV_MISRA_COMPLIANCE_REQUIRED
typedef Int32 NvS32; /* -2147483648 to 2147483647 */
#else
typedef signed int NvS32; /* -2147483648 to 2147483647 */
#endif
#endif // defined(NV_MACINTOSH) && !defined(NV_MACINTOSH_64)
#endif // __INT32_TYPE__
#endif // NVTYPES_USE_STDINT
#if NVTYPES_USE_STDINT
typedef uint64_t NvU64 OPTIONAL_ALIGN8_ATTR; /* 0 to 18446744073709551615 */
typedef int64_t NvS64 OPTIONAL_ALIGN8_ATTR; /* -9223372036854775808 to 9223372036854775807 */
#define NvU64_fmtX PRIX64
#define NvU64_fmtx PRIx64
#define NvU64_fmtu PRIu64
#define NvU64_fmto PRIo64
#define NvS64_fmtd PRId64
#define NvS64_fmti PRIi64
#else
// 64-bit types for compilers that support them, plus some obsolete variants
#if defined(__GNUC__) || defined(__clang__) || defined(__arm) || \
defined(__IAR_SYSTEMS_ICC__) || defined(__ghs__) || defined(_WIN64) || \
defined(__SUNPRO_C) || defined(__SUNPRO_CC) || defined (__xlC__)
#ifdef NV_MISRA_COMPLIANCE_REQUIRED
typedef UInt64 NvU64 OPTIONAL_ALIGN8_ATTR; /* 0 to 18446744073709551615 */
typedef Int64 NvS64 OPTIONAL_ALIGN8_ATTR; /* -9223372036854775808 to 9223372036854775807 */
#else
typedef unsigned long long NvU64 OPTIONAL_ALIGN8_ATTR; /* 0 to 18446744073709551615 */
typedef long long NvS64 OPTIONAL_ALIGN8_ATTR; /* -9223372036854775808 to 9223372036854775807 */
#endif
#define NvU64_fmtX "llX"
#define NvU64_fmtx "llx"
#define NvU64_fmtu "llu"
#define NvU64_fmto "llo"
#define NvS64_fmtd "lld"
#define NvS64_fmti "lli"
// Microsoft since 2003 -- https://msdn.microsoft.com/en-us/library/29dh1w7z.aspx
#else
typedef unsigned __int64 NvU64 OPTIONAL_ALIGN8_ATTR; /* 0 to 18446744073709551615 */
typedef __int64 NvS64 OPTIONAL_ALIGN8_ATTR; /* -9223372036854775808 to 9223372036854775807 */
#define NvU64_fmtX "I64X"
#define NvU64_fmtx "I64x"
#define NvU64_fmtu "I64u"
#define NvU64_fmto "I64o"
#define NvS64_fmtd "I64d"
#define NvS64_fmti "I64i"
#endif
#endif // NVTYPES_USE_STDINT
#ifdef NV_TYPESAFE_HANDLES
/*
* Can't use opaque pointer as clients might be compiled with mismatched
* pointer sizes. TYPESAFE check will eventually be removed once all clients
 * have transitioned safely to NvHandle.
* The plan is to then eventually scale up the handle to be 64-bits.
*/
typedef struct
{
NvU32 val;
} NvHandle;
#else
/*
 * For compatibility with modules that haven't moved to typesafe handles.
*/
typedef NvU32 NvHandle;
#endif // NV_TYPESAFE_HANDLES
/* Boolean type */
typedef NvU8 NvBool;
#define NV_TRUE ((NvBool)(0 == 0))
#define NV_FALSE ((NvBool)(0 != 0))
/* Tristate type: NV_TRISTATE_FALSE, NV_TRISTATE_TRUE, NV_TRISTATE_INDETERMINATE */
typedef NvU8 NvTristate;
#define NV_TRISTATE_FALSE ((NvTristate) 0)
#define NV_TRISTATE_TRUE ((NvTristate) 1)
#define NV_TRISTATE_INDETERMINATE ((NvTristate) 2)
/* Macros to extract the low and high parts of a 64-bit unsigned integer */
/* Also designed to work if someone happens to pass in a 32-bit integer */
#ifdef NV_MISRA_COMPLIANCE_REQUIRED
#define NvU64_HI32(n) ((NvU32)((((NvU64)(n)) >> 32) & 0xffffffffU))
#define NvU64_LO32(n) ((NvU32)(( (NvU64)(n)) & 0xffffffffU))
#else
#define NvU64_HI32(n) ((NvU32)((((NvU64)(n)) >> 32) & 0xffffffff))
#define NvU64_LO32(n) ((NvU32)(( (NvU64)(n)) & 0xffffffff))
#endif
#define NvU40_HI32(n) ((NvU32)((((NvU64)(n)) >> 8) & 0xffffffffU))
#define NvU40_HI24of32(n) ((NvU32)( (NvU64)(n) & 0xffffff00U))
/* Macros to get the MSB and LSB of a 32 bit unsigned number */
#define NvU32_HI16(n) ((NvU16)((((NvU32)(n)) >> 16) & 0xffffU))
#define NvU32_LO16(n) ((NvU16)(( (NvU32)(n)) & 0xffffU))
/***************************************************************************\
|* *|
|* 64 bit type definitions for use in interface structures. *|
|* *|
\***************************************************************************/
#if defined(NV_64_BITS)
typedef void* NvP64; /* 64 bit void pointer */
typedef NvU64 NvUPtr; /* pointer sized unsigned int */
typedef NvS64 NvSPtr; /* pointer sized signed int */
typedef NvU64 NvLength; /* length to agree with sizeof */
#define NvP64_VALUE(n) (n)
#define NvP64_fmt "%p"
#define KERNEL_POINTER_FROM_NvP64(p,v) ((p)(v))
#define NvP64_PLUS_OFFSET(p,o) (NvP64)((NvU64)(p) + (NvU64)(o))
#define NvUPtr_fmtX NvU64_fmtX
#define NvUPtr_fmtx NvU64_fmtx
#define NvUPtr_fmtu NvU64_fmtu
#define NvUPtr_fmto NvU64_fmto
#define NvSPtr_fmtd NvS64_fmtd
#define NvSPtr_fmti NvS64_fmti
#else
typedef NvU64 NvP64; /* 64 bit void pointer */
typedef NvU32 NvUPtr; /* pointer sized unsigned int */
typedef NvS32 NvSPtr; /* pointer sized signed int */
typedef NvU32 NvLength; /* length to agree with sizeof */
#define NvP64_VALUE(n) ((void *)(NvUPtr)(n))
#define NvP64_fmt "0x%llx"
#define KERNEL_POINTER_FROM_NvP64(p,v) ((p)(NvUPtr)(v))
#define NvP64_PLUS_OFFSET(p,o) ((p) + (NvU64)(o))
#define NvUPtr_fmtX "X"
#define NvUPtr_fmtx "x"
#define NvUPtr_fmtu "u"
#define NvUPtr_fmto "o"
#define NvSPtr_fmtd "d"
#define NvSPtr_fmti "i"
#endif
#define NvP64_NULL (NvP64)0
/*!
* Helper macro to pack an @ref NvU64_ALIGN32 structure from a @ref NvU64.
*
* @param[out] pDst Pointer to NvU64_ALIGN32 structure to pack
* @param[in] pSrc Pointer to NvU64 with which to pack
*/
#define NvU64_ALIGN32_PACK(pDst, pSrc) \
do { \
(pDst)->lo = NvU64_LO32(*(pSrc)); \
(pDst)->hi = NvU64_HI32(*(pSrc)); \
} while (NV_FALSE)
/*!
* Helper macro to unpack a @ref NvU64_ALIGN32 structure into a @ref NvU64.
*
* @param[out] pDst Pointer to NvU64 in which to unpack
* @param[in] pSrc Pointer to NvU64_ALIGN32 structure from which to unpack
*/
#define NvU64_ALIGN32_UNPACK(pDst, pSrc) \
do { \
(*(pDst)) = NvU64_ALIGN32_VAL(pSrc); \
} while (NV_FALSE)
/*!
* Helper macro to unpack a @ref NvU64_ALIGN32 structure as a @ref NvU64.
*
* @param[in] pSrc Pointer to NvU64_ALIGN32 structure to unpack
*/
#define NvU64_ALIGN32_VAL(pSrc) \
((NvU64) ((NvU64)((pSrc)->lo) | (((NvU64)(pSrc)->hi) << 32U)))
/*!
* Helper macro to check whether the 32 bit aligned 64 bit number is zero.
*
* @param[in] _pU64 Pointer to NvU64_ALIGN32 structure.
*
* @return
* NV_TRUE _pU64 is zero.
* NV_FALSE otherwise.
*/
#define NvU64_ALIGN32_IS_ZERO(_pU64) \
(((_pU64)->lo == 0U) && ((_pU64)->hi == 0U))
/*!
 * Helper macro to add two 32 bit aligned 64 bit numbers on a 64 bit processor.
*
* @param[in] pSrc1 Pointer to NvU64_ALIGN32 source 1 structure.
* @param[in] pSrc2 Pointer to NvU64_ALIGN32 source 2 structure.
 * @param[out]    pDst     Pointer to NvU64_ALIGN32 destination structure.
*/
#define NvU64_ALIGN32_ADD(pDst, pSrc1, pSrc2) \
do { \
    NvU64 __dst, __src1, __src2;                                   \
                                                                   \
    NvU64_ALIGN32_UNPACK(&__src1, (pSrc1));                        \
    NvU64_ALIGN32_UNPACK(&__src2, (pSrc2));                        \
    __dst = __src1 + __src2;                                       \
NvU64_ALIGN32_PACK((pDst), &__dst); \
} while (NV_FALSE)
/*!
 * Helper macro to subtract two 32 bit aligned 64 bit numbers on a 64 bit processor.
*
* @param[in] pSrc1 Pointer to NvU64_ALIGN32 source 1 structure.
* @param[in] pSrc2 Pointer to NvU64_ALIGN32 source 2 structure.
 * @param[out]    pDst     Pointer to NvU64_ALIGN32 destination structure.
*/
#define NvU64_ALIGN32_SUB(pDst, pSrc1, pSrc2) \
do { \
    NvU64 __dst, __src1, __src2;                                   \
                                                                   \
    NvU64_ALIGN32_UNPACK(&__src1, (pSrc1));                        \
    NvU64_ALIGN32_UNPACK(&__src2, (pSrc2));                        \
    __dst = __src1 - __src2;                                       \
NvU64_ALIGN32_PACK((pDst), &__dst); \
} while (NV_FALSE)
/*!
* Structure for representing 32 bit aligned NvU64 (64-bit unsigned integer)
* structures. This structure must be used because the 32 bit processor and
* 64 bit processor compilers will pack/align NvU64 differently.
*
 * One use case is the RM running on a 64 bit processor while the PMU runs on
 * a 32 bit processor; the alignment difference would otherwise result in
 * corrupted transactions between the RM and PMU.
*
* See the @ref NvU64_ALIGN32_PACK and @ref NvU64_ALIGN32_UNPACK macros for
* packing and unpacking these structures.
*
 * @note The intention of this structure is to provide a datatype which will
 *       be packed/aligned consistently and efficiently across all platforms.
 *       We don't want to use "NV_DECLARE_ALIGNED(NvU64, 8)" because that
 *       leads to memory waste on our 32-bit microprocessors (e.g. FALCONs) where
* DMEM efficiency is vital.
*/
typedef struct
{
/*!
* Low 32 bits.
*/
NvU32 lo;
/*!
* High 32 bits.
*/
NvU32 hi;
} NvU64_ALIGN32;
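/*
 * Illustrative sketch (not part of the original header): round-tripping a
 * 64 bit value through NvU64_ALIGN32, as might be done before placing it in
 * a structure shared with a 32 bit microprocessor. Names are hypothetical.
 *
 *     NvU64 bytes = 0x100000000ULL;
 *     NvU64_ALIGN32 wire;
 *     NvU64_ALIGN32_PACK(&wire, &bytes);       // wire.lo == 0, wire.hi == 1
 *
 *     NvU64 copy;
 *     NvU64_ALIGN32_UNPACK(&copy, &wire);      // copy == bytes
 *     NvU64_ALIGN32_ADD(&wire, &wire, &wire);  // wire now holds 0x200000000
 */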
/* Useful macro to hide required double cast */
#define NV_PTR_TO_NvP64(n) (NvP64)(NvUPtr)(n)
#define NV_SIGN_EXT_PTR_TO_NvP64(p) ((NvP64)(NvS64)(NvSPtr)(p))
#define KERNEL_POINTER_TO_NvP64(p) ((NvP64)(uintptr_t)(p))
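/*
 * Illustrative sketch (not part of the original header): passing a kernel
 * pointer through an interface structure as NvP64 and recovering it. Names
 * are hypothetical.
 *
 *     void *buffer = kbuf;
 *     NvP64 handle = NV_PTR_TO_NvP64(buffer);
 *     void *again  = KERNEL_POINTER_FROM_NvP64(void *, handle);
 */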
/***************************************************************************\
|* *|
|* Limits for common types. *|
|* *|
\***************************************************************************/
/* Explanation of the current form of these limits:
*
* - Decimal is used, as hex values are by default positive.
* - Casts are not used, as usage in the preprocessor itself (#if) ends poorly.
* - The subtraction of 1 for some MIN values is used to get around the fact
* that the C syntax actually treats -x as NEGATE(x) instead of a distinct
 *   number. Since 2147483648 isn't a valid positive 32-bit signed value, we
* take the largest valid positive signed number, negate it, and subtract 1.
*/
#define NV_S8_MIN (-128)
#define NV_S8_MAX (+127)
#define NV_U8_MIN (0U)
#define NV_U8_MAX (+255U)
#define NV_S16_MIN (-32768)
#define NV_S16_MAX (+32767)
#define NV_U16_MIN (0U)
#define NV_U16_MAX (+65535U)
#define NV_S32_MIN (-2147483647 - 1)
#define NV_S32_MAX (+2147483647)
#define NV_U32_MIN (0U)
#define NV_U32_MAX (+4294967295U)
#define NV_S64_MIN (-9223372036854775807LL - 1LL)
#define NV_S64_MAX (+9223372036854775807LL)
#define NV_U64_MIN (0ULL)
#define NV_U64_MAX (+18446744073709551615ULL)
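/*
 * Illustrative sketch (not part of the original header): because the limits
 * are plain decimal literals, they remain usable in preprocessor arithmetic.
 * SOME_CONFIGURED_COUNT is a hypothetical symbol.
 *
 *     #if SOME_CONFIGURED_COUNT > NV_U16_MAX
 *     #error "count must fit in an NvU16"
 *     #endif
 */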
/* Aligns fields in structs so they match up between 32 and 64 bit builds */
#if defined(__GNUC__) || defined(__clang__) || defined(NV_QNX)
#define NV_ALIGN_BYTES(size) __attribute__ ((aligned (size)))
#elif defined(__arm)
#define NV_ALIGN_BYTES(size) __align(size)
#else
// XXX This is dangerously nonportable! We really shouldn't provide a default
// version of this that doesn't do anything.
#define NV_ALIGN_BYTES(size)
#endif
// NV_DECLARE_ALIGNED() can be used on all platforms.
// This macro form accounts for the fact that __declspec on Windows is required
// before the variable type,
// and NV_ALIGN_BYTES is required after the variable name.
#if defined(__GNUC__) || defined(__clang__) || defined(NV_QNX)
#define NV_DECLARE_ALIGNED(TYPE_VAR, ALIGN) TYPE_VAR __attribute__ ((aligned (ALIGN)))
#elif defined(_MSC_VER)
#define NV_DECLARE_ALIGNED(TYPE_VAR, ALIGN) __declspec(align(ALIGN)) TYPE_VAR
#elif defined(__arm)
#define NV_DECLARE_ALIGNED(TYPE_VAR, ALIGN) __align(ALIGN) TYPE_VAR
#endif
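/*
 * Illustrative sketch (not part of the original header): forcing a 64 bit
 * field to 8 byte alignment so the structure layout matches between 32 bit
 * and 64 bit builds. The structure is hypothetical.
 *
 *     typedef struct
 *     {
 *         NvU32 count;
 *         NV_DECLARE_ALIGNED(NvU64 offset, 8);
 *     } EXAMPLE_PARAMS;
 */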
/***************************************************************************\
|* Function Declaration Types *|
\***************************************************************************/
// Stretching the meaning of "nvtypes", but this seems to be the least
// offensive place to relocate these from nvos.h, which cannot be included
// by a number of builds that need them.
#if defined(_MSC_VER)
#if _MSC_VER >= 1310
#define NV_NOINLINE __declspec(noinline)
#else
#define NV_NOINLINE
#endif
#define NV_INLINE __inline
#if _MSC_VER >= 1200
#define NV_FORCEINLINE __forceinline
#else
#define NV_FORCEINLINE __inline
#endif
#define NV_APIENTRY __stdcall
#define NV_FASTCALL __fastcall
#define NV_CDECLCALL __cdecl
#define NV_STDCALL __stdcall
#define NV_FORCERESULTCHECK
#define NV_ATTRIBUTE_UNUSED
#define NV_FORMAT_PRINTF(_f, _a)
#else // ! defined(_MSC_VER)
#if defined(__GNUC__)
#if (__GNUC__ > 3) || \
((__GNUC__ == 3) && (__GNUC_MINOR__ >= 1) && (__GNUC_PATCHLEVEL__ >= 1))
#define NV_NOINLINE __attribute__((__noinline__))
#endif
#elif defined(__clang__)
#if __has_attribute(noinline)
#define NV_NOINLINE __attribute__((__noinline__))
#endif
#elif defined(__arm) && (__ARMCC_VERSION >= 300000)
#define NV_NOINLINE __attribute__((__noinline__))
#elif (defined(__SUNPRO_C) && (__SUNPRO_C >= 0x590)) ||\
(defined(__SUNPRO_CC) && (__SUNPRO_CC >= 0x590))
#define NV_NOINLINE __attribute__((__noinline__))
#elif defined (__INTEL_COMPILER)
#define NV_NOINLINE __attribute__((__noinline__))
#endif
#if !defined(NV_NOINLINE)
#define NV_NOINLINE
#endif
/* GreenHills compiler defines __GNUC__, but doesn't support
 * the __inline__ keyword. */
#if defined(__ghs__)
#define NV_INLINE inline
#elif defined(__GNUC__) || defined(__clang__) || defined(__INTEL_COMPILER)
#define NV_INLINE __inline__
#elif defined (macintosh) || defined(__SUNPRO_C) || defined(__SUNPRO_CC)
#define NV_INLINE inline
#elif defined(__arm)
#define NV_INLINE __inline
#else
#define NV_INLINE
#endif
/* Don't force inline on DEBUG builds -- it's annoying for debuggers. */
#if !defined(DEBUG)
/* GreenHills compiler defines __GNUC__, but doesn't support
 * __attribute__ or __inline__ keywords. */
#if defined(__ghs__)
#define NV_FORCEINLINE inline
#elif defined(__GNUC__)
// GCC 3.1 and beyond support the always_inline function attribute.
#if (__GNUC__ > 3) || ((__GNUC__ == 3) && (__GNUC_MINOR__ >= 1))
#define NV_FORCEINLINE __attribute__((__always_inline__)) __inline__
#else
#define NV_FORCEINLINE __inline__
#endif
#elif defined(__clang__)
#if __has_attribute(always_inline)
#define NV_FORCEINLINE __attribute__((__always_inline__)) __inline__
#else
#define NV_FORCEINLINE __inline__
#endif
#elif defined(__arm) && (__ARMCC_VERSION >= 220000)
// RVDS 2.2 also supports forceinline, but ADS 1.2 does not
#define NV_FORCEINLINE __forceinline
#else /* defined(__GNUC__) */
#define NV_FORCEINLINE NV_INLINE
#endif
#else
#define NV_FORCEINLINE NV_INLINE
#endif
#define NV_APIENTRY
#define NV_FASTCALL
#define NV_CDECLCALL
#define NV_STDCALL
/*
* The 'warn_unused_result' function attribute prompts GCC to issue a
* warning if the result of a function tagged with this attribute
* is ignored by a caller. In combination with '-Werror', it can be
* used to enforce result checking in RM code; at this point, this
* is only done on UNIX.
*/
#if defined(__GNUC__) && defined(NV_UNIX)
#if (__GNUC__ > 3) || ((__GNUC__ == 3) && (__GNUC_MINOR__ >= 4))
#define NV_FORCERESULTCHECK __attribute__((__warn_unused_result__))
#else
#define NV_FORCERESULTCHECK
#endif
#elif defined(__clang__)
#if __has_attribute(warn_unused_result)
#define NV_FORCERESULTCHECK __attribute__((__warn_unused_result__))
#else
#define NV_FORCERESULTCHECK
#endif
#else /* defined(__GNUC__) */
#define NV_FORCERESULTCHECK
#endif
#if defined(__GNUC__) || defined(__clang__) || defined(__INTEL_COMPILER)
#define NV_ATTRIBUTE_UNUSED __attribute__((__unused__))
#else
#define NV_ATTRIBUTE_UNUSED
#endif
/*
* Functions decorated with NV_FORMAT_PRINTF(f, a) have a format string at
* parameter number 'f' and variadic arguments start at parameter number 'a'.
* (Note that for C++ methods, there is an implicit 'this' parameter so
* explicit parameters are numbered from 2.)
*/
#if defined(__GNUC__)
#define NV_FORMAT_PRINTF(_f, _a) __attribute__((format(printf, _f, _a)))
#else
#define NV_FORMAT_PRINTF(_f, _a)
#endif
#endif // defined(_MSC_VER)
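/*
 * Illustrative sketch (not part of the original header): combining the
 * declaration decorations above; exampleLog() is hypothetical.
 *
 *     NV_FORCERESULTCHECK NV_FORMAT_PRINTF(1, 2)
 *     int exampleLog(const char *fmt, ...);
 *
 * With GCC on UNIX, ignoring exampleLog()'s return value, or passing
 * arguments that do not match the format string, produces a warning.
 */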
#ifdef __cplusplus
}
#endif
#endif /* NVTYPES_INCLUDED */


@@ -0,0 +1,255 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 1999-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
/*
* Os interface definitions needed by os-interface.c
*/
#ifndef OS_INTERFACE_H
#define OS_INTERFACE_H
/******************* Operating System Interface Routines *******************\
* *
* Operating system wrapper functions used to abstract the OS. *
* *
\***************************************************************************/
#include <nvtypes.h>
#include <nvstatus.h>
#include "nv_stdarg.h"
#include <nv-kernel-interface-api.h>
#include <os/nv_memory_type.h>
#include <nv-caps.h>
typedef struct
{
NvU32 os_major_version;
NvU32 os_minor_version;
NvU32 os_build_number;
const char * os_build_version_str;
const char * os_build_date_plus_str;
} os_version_info;
/* Each OS defines its own version of this opaque type */
struct os_work_queue;
/* Each OS defines its own version of this opaque type */
typedef struct os_wait_queue os_wait_queue;
/*
* ---------------------------------------------------------------------------
*
* Function prototypes for OS interface.
*
* ---------------------------------------------------------------------------
*/
NvU64 NV_API_CALL os_get_num_phys_pages (void);
NV_STATUS NV_API_CALL os_alloc_mem (void **, NvU64);
void NV_API_CALL os_free_mem (void *);
NV_STATUS NV_API_CALL os_get_current_time (NvU32 *, NvU32 *);
NvU64 NV_API_CALL os_get_current_tick (void);
NvU64 NV_API_CALL os_get_current_tick_hr (void);
NvU64 NV_API_CALL os_get_tick_resolution (void);
NV_STATUS NV_API_CALL os_delay (NvU32);
NV_STATUS NV_API_CALL os_delay_us (NvU32);
NvU64 NV_API_CALL os_get_cpu_frequency (void);
NvU32 NV_API_CALL os_get_current_process (void);
void NV_API_CALL os_get_current_process_name (char *, NvU32);
NV_STATUS NV_API_CALL os_get_current_thread (NvU64 *);
char* NV_API_CALL os_string_copy (char *, const char *);
NvU32 NV_API_CALL os_string_length (const char *);
NvU32 NV_API_CALL os_strtoul (const char *, char **, NvU32);
NvS32 NV_API_CALL os_string_compare (const char *, const char *);
NvS32 NV_API_CALL os_snprintf (char *, NvU32, const char *, ...);
NvS32 NV_API_CALL os_vsnprintf (char *, NvU32, const char *, va_list);
void NV_API_CALL os_log_error (const char *, va_list);
void* NV_API_CALL os_mem_copy (void *, const void *, NvU32);
NV_STATUS NV_API_CALL os_memcpy_from_user (void *, const void *, NvU32);
NV_STATUS NV_API_CALL os_memcpy_to_user (void *, const void *, NvU32);
void* NV_API_CALL os_mem_set (void *, NvU8, NvU32);
NvS32 NV_API_CALL os_mem_cmp (const NvU8 *, const NvU8 *, NvU32);
void* NV_API_CALL os_pci_init_handle (NvU32, NvU8, NvU8, NvU8, NvU16 *, NvU16 *);
NV_STATUS NV_API_CALL os_pci_read_byte (void *, NvU32, NvU8 *);
NV_STATUS NV_API_CALL os_pci_read_word (void *, NvU32, NvU16 *);
NV_STATUS NV_API_CALL os_pci_read_dword (void *, NvU32, NvU32 *);
NV_STATUS NV_API_CALL os_pci_write_byte (void *, NvU32, NvU8);
NV_STATUS NV_API_CALL os_pci_write_word (void *, NvU32, NvU16);
NV_STATUS NV_API_CALL os_pci_write_dword (void *, NvU32, NvU32);
NvBool NV_API_CALL os_pci_remove_supported (void);
void NV_API_CALL os_pci_remove (void *);
void* NV_API_CALL os_map_kernel_space (NvU64, NvU64, NvU32);
void NV_API_CALL os_unmap_kernel_space (void *, NvU64);
void* NV_API_CALL os_map_user_space (NvU64, NvU64, NvU32, NvU32, void **);
void NV_API_CALL os_unmap_user_space (void *, NvU64, void *);
NV_STATUS NV_API_CALL os_flush_cpu_cache (void);
NV_STATUS NV_API_CALL os_flush_cpu_cache_all (void);
NV_STATUS NV_API_CALL os_flush_user_cache (void);
void NV_API_CALL os_flush_cpu_write_combine_buffer(void);
NvU8 NV_API_CALL os_io_read_byte (NvU32);
NvU16 NV_API_CALL os_io_read_word (NvU32);
NvU32 NV_API_CALL os_io_read_dword (NvU32);
void NV_API_CALL os_io_write_byte (NvU32, NvU8);
void NV_API_CALL os_io_write_word (NvU32, NvU16);
void NV_API_CALL os_io_write_dword (NvU32, NvU32);
NvBool NV_API_CALL os_is_administrator (void);
NvBool NV_API_CALL os_allow_priority_override (void);
void NV_API_CALL os_dbg_init (void);
void NV_API_CALL os_dbg_breakpoint (void);
void NV_API_CALL os_dbg_set_level (NvU32);
NvU32 NV_API_CALL os_get_cpu_count (void);
NvU32 NV_API_CALL os_get_cpu_number (void);
void NV_API_CALL os_disable_console_access (void);
void NV_API_CALL os_enable_console_access (void);
NV_STATUS NV_API_CALL os_registry_init (void);
NV_STATUS NV_API_CALL os_schedule (void);
NV_STATUS NV_API_CALL os_alloc_spinlock (void **);
void NV_API_CALL os_free_spinlock (void *);
NvU64 NV_API_CALL os_acquire_spinlock (void *);
void NV_API_CALL os_release_spinlock (void *, NvU64);
NV_STATUS NV_API_CALL os_queue_work_item (struct os_work_queue *, void *);
NV_STATUS NV_API_CALL os_flush_work_queue (struct os_work_queue *);
NV_STATUS NV_API_CALL os_alloc_mutex (void **);
void NV_API_CALL os_free_mutex (void *);
NV_STATUS NV_API_CALL os_acquire_mutex (void *);
NV_STATUS NV_API_CALL os_cond_acquire_mutex (void *);
void NV_API_CALL os_release_mutex (void *);
void* NV_API_CALL os_alloc_semaphore (NvU32);
void NV_API_CALL os_free_semaphore (void *);
NV_STATUS NV_API_CALL os_acquire_semaphore (void *);
NV_STATUS NV_API_CALL os_cond_acquire_semaphore (void *);
NV_STATUS NV_API_CALL os_release_semaphore (void *);
NvBool NV_API_CALL os_semaphore_may_sleep (void);
NV_STATUS NV_API_CALL os_get_version_info (os_version_info*);
NvBool NV_API_CALL os_is_isr (void);
NvBool NV_API_CALL os_pat_supported (void);
void NV_API_CALL os_dump_stack (void);
NvBool NV_API_CALL os_is_efi_enabled (void);
NvBool NV_API_CALL os_is_xen_dom0 (void);
NvBool NV_API_CALL os_is_vgx_hyper (void);
NV_STATUS NV_API_CALL os_inject_vgx_msi (NvU16, NvU64, NvU32);
NvBool NV_API_CALL os_is_grid_supported (void);
NvU32 NV_API_CALL os_get_grid_csp_support (void);
void NV_API_CALL os_get_screen_info (NvU64 *, NvU16 *, NvU16 *, NvU16 *, NvU16 *, NvU64, NvU64);
void NV_API_CALL os_bug_check (NvU32, const char *);
NV_STATUS NV_API_CALL os_lock_user_pages (void *, NvU64, void **, NvU32);
NV_STATUS NV_API_CALL os_lookup_user_io_memory (void *, NvU64, NvU64 **, void**);
NV_STATUS NV_API_CALL os_unlock_user_pages (NvU64, void *);
NV_STATUS NV_API_CALL os_match_mmap_offset (void *, NvU64, NvU64 *);
NV_STATUS NV_API_CALL os_get_euid (NvU32 *);
NV_STATUS NV_API_CALL os_get_smbios_header (NvU64 *pSmbsAddr);
NV_STATUS NV_API_CALL os_get_acpi_rsdp_from_uefi (NvU32 *);
void NV_API_CALL os_add_record_for_crashLog (void *, NvU32);
void NV_API_CALL os_delete_record_for_crashLog (void *);
NV_STATUS NV_API_CALL os_call_vgpu_vfio (void *, NvU32);
NV_STATUS NV_API_CALL os_numa_memblock_size (NvU64 *);
NV_STATUS NV_API_CALL os_alloc_pages_node (NvS32, NvU32, NvU32, NvU64 *);
NV_STATUS NV_API_CALL os_get_page (NvU64 address);
NV_STATUS NV_API_CALL os_put_page (NvU64 address);
NvU32 NV_API_CALL os_get_page_refcount (NvU64 address);
NvU32 NV_API_CALL os_count_tail_pages (NvU64 address);
void NV_API_CALL os_free_pages_phys (NvU64, NvU32);
NV_STATUS NV_API_CALL os_call_nv_vmbus (NvU32, void *);
NV_STATUS NV_API_CALL os_open_temporary_file (void **);
void NV_API_CALL os_close_file (void *);
NV_STATUS NV_API_CALL os_write_file (void *, NvU8 *, NvU64, NvU64);
NV_STATUS NV_API_CALL os_read_file (void *, NvU8 *, NvU64, NvU64);
NV_STATUS NV_API_CALL os_open_readonly_file (const char *, void **);
NV_STATUS NV_API_CALL os_open_and_read_file (const char *, NvU8 *, NvU64);
NvBool NV_API_CALL os_is_nvswitch_present (void);
void NV_API_CALL os_get_random_bytes (NvU8 *, NvU16);
NV_STATUS NV_API_CALL os_alloc_wait_queue (os_wait_queue **);
void NV_API_CALL os_free_wait_queue (os_wait_queue *);
void NV_API_CALL os_wait_uninterruptible (os_wait_queue *);
void NV_API_CALL os_wait_interruptible (os_wait_queue *);
void NV_API_CALL os_wake_up (os_wait_queue *);
nv_cap_t* NV_API_CALL os_nv_cap_init (const char *);
nv_cap_t* NV_API_CALL os_nv_cap_create_dir_entry (nv_cap_t *, const char *, int);
nv_cap_t* NV_API_CALL os_nv_cap_create_file_entry (nv_cap_t *, const char *, int);
void NV_API_CALL os_nv_cap_destroy_entry (nv_cap_t *);
int NV_API_CALL os_nv_cap_validate_and_dup_fd(const nv_cap_t *, int);
void NV_API_CALL os_nv_cap_close_fd (int);
extern NvU32 os_page_size;
extern NvU64 os_page_mask;
extern NvU8 os_page_shift;
extern NvU32 os_sev_status;
extern NvBool os_sev_enabled;
extern NvBool os_dma_buf_enabled;
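/*
 * Illustrative sketch (not part of the original header): a typical
 * allocate/use/free sequence with the wrappers above. Names are
 * hypothetical.
 *
 *     void *buf = NULL;
 *     if (os_alloc_mem(&buf, 4096) == NV_OK)
 *     {
 *         os_mem_set(buf, 0, 4096);
 *         os_free_mem(buf);
 *     }
 */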
/*
* ---------------------------------------------------------------------------
*
* Debug macros.
*
* ---------------------------------------------------------------------------
*/
#define NV_DBG_INFO 0x0
#define NV_DBG_SETUP 0x1
#define NV_DBG_USERERRORS 0x2
#define NV_DBG_WARNINGS 0x3
#define NV_DBG_ERRORS 0x4
void NV_API_CALL out_string(const char *str);
int NV_API_CALL nv_printf(NvU32 debuglevel, const char *printf_format, ...);
#define NV_DEV_PRINTF(debuglevel, nv, format, ... ) \
nv_printf(debuglevel, "NVRM: GPU " NV_PCI_DEV_FMT ": " format, NV_PCI_DEV_FMT_ARGS(nv), ## __VA_ARGS__)
#define NV_DEV_PRINTF_STATUS(debuglevel, nv, status, format, ... ) \
nv_printf(debuglevel, "NVRM: GPU " NV_PCI_DEV_FMT ": " format " (0x%x)\n", NV_PCI_DEV_FMT_ARGS(nv), ## __VA_ARGS__, status)
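/*
 * Illustrative sketch (not part of the original header): typical logging
 * calls. "nv" is assumed to be a per-GPU state pointer providing the
 * NV_PCI_DEV_FMT_ARGS() fields, and "state" is hypothetical.
 *
 *     nv_printf(NV_DBG_ERRORS, "NVRM: allocation failed\n");
 *     NV_DEV_PRINTF(NV_DBG_WARNINGS, nv, "unexpected state 0x%x\n", state);
 */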
/*
* Fields for os_lock_user_pages flags parameter
*/
#define NV_LOCK_USER_PAGES_FLAGS_WRITE 0:0
#define NV_LOCK_USER_PAGES_FLAGS_WRITE_NO 0x00000000
#define NV_LOCK_USER_PAGES_FLAGS_WRITE_YES 0x00000001
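/*
 * Illustrative sketch (not part of the original header): the WRITE field
 * occupies bit 0:0, so a writable mapping can be requested directly with the
 * _YES value. Names other than the flag macros are hypothetical.
 *
 *     void *pageArray;
 *     NV_STATUS status = os_lock_user_pages(userAddress, pageCount,
 *                                           &pageArray,
 *                                           NV_LOCK_USER_PAGES_FLAGS_WRITE_YES);
 */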
#endif /* OS_INTERFACE_H */


@@ -0,0 +1,41 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2020 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef NV_MEMORY_TYPE_H
#define NV_MEMORY_TYPE_H
#define NV_MEMORY_NONCONTIGUOUS 0
#define NV_MEMORY_CONTIGUOUS 1
#define NV_MEMORY_CACHED 0
#define NV_MEMORY_UNCACHED 1
#define NV_MEMORY_WRITECOMBINED 2
#define NV_MEMORY_WRITEBACK 5
#define NV_MEMORY_DEFAULT 6
#define NV_MEMORY_UNCACHED_WEAK 7
#define NV_PROTECT_READABLE 1
#define NV_PROTECT_WRITEABLE 2
#define NV_PROTECT_READ_WRITE (NV_PROTECT_READABLE | NV_PROTECT_WRITEABLE)
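/*
 * Illustrative sketch (not part of the original header): these values are
 * the caching and protection modes passed to mapping routines such as
 * os_map_kernel_space(). Variable names are hypothetical.
 *
 *     void *cpuPtr = os_map_kernel_space(physAddr, size,
 *                                        NV_MEMORY_WRITECOMBINED);
 */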
#endif /* NV_MEMORY_TYPE_H */


@@ -0,0 +1,110 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 1999-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _RM_GPU_OPS_H_
#define _RM_GPU_OPS_H_
#include <nvtypes.h>
#include <nvCpuUuid.h>
#include "nv_stdarg.h"
#include <nv-ioctl.h>
#include <nvmisc.h>
NV_STATUS NV_API_CALL rm_gpu_ops_create_session (nvidia_stack_t *, nvgpuSessionHandle_t *);
NV_STATUS NV_API_CALL rm_gpu_ops_destroy_session (nvidia_stack_t *, nvgpuSessionHandle_t);
NV_STATUS NV_API_CALL rm_gpu_ops_device_create (nvidia_stack_t *, nvgpuSessionHandle_t, const nvgpuInfo_t *, const NvProcessorUuid *, nvgpuDeviceHandle_t *, NvBool);
NV_STATUS NV_API_CALL rm_gpu_ops_device_destroy (nvidia_stack_t *, nvgpuDeviceHandle_t);
NV_STATUS NV_API_CALL rm_gpu_ops_address_space_create(nvidia_stack_t *, nvgpuDeviceHandle_t, unsigned long long, unsigned long long, nvgpuAddressSpaceHandle_t *, nvgpuAddressSpaceInfo_t);
NV_STATUS NV_API_CALL rm_gpu_ops_dup_address_space(nvidia_stack_t *, nvgpuDeviceHandle_t, NvHandle, NvHandle, nvgpuAddressSpaceHandle_t *, nvgpuAddressSpaceInfo_t);
NV_STATUS NV_API_CALL rm_gpu_ops_address_space_destroy(nvidia_stack_t *, nvgpuAddressSpaceHandle_t);
NV_STATUS NV_API_CALL rm_gpu_ops_memory_alloc_fb(nvidia_stack_t *, nvgpuAddressSpaceHandle_t, NvLength, NvU64 *, nvgpuAllocInfo_t);
NV_STATUS NV_API_CALL rm_gpu_ops_pma_alloc_pages(nvidia_stack_t *, void *, NvLength, NvU32 , nvgpuPmaAllocationOptions_t, NvU64 *);
NV_STATUS NV_API_CALL rm_gpu_ops_pma_free_pages(nvidia_stack_t *, void *, NvU64 *, NvLength , NvU32, NvU32);
NV_STATUS NV_API_CALL rm_gpu_ops_pma_pin_pages(nvidia_stack_t *, void *, NvU64 *, NvLength , NvU32, NvU32);
NV_STATUS NV_API_CALL rm_gpu_ops_pma_unpin_pages(nvidia_stack_t *, void *, NvU64 *, NvLength , NvU32);
NV_STATUS NV_API_CALL rm_gpu_ops_get_pma_object(nvidia_stack_t *, nvgpuDeviceHandle_t, void **, const nvgpuPmaStatistics_t *);
NV_STATUS NV_API_CALL rm_gpu_ops_pma_register_callbacks(nvidia_stack_t *sp, void *, nvPmaEvictPagesCallback, nvPmaEvictRangeCallback, void *);
void NV_API_CALL rm_gpu_ops_pma_unregister_callbacks(nvidia_stack_t *sp, void *);
NV_STATUS NV_API_CALL rm_gpu_ops_memory_alloc_sys(nvidia_stack_t *, nvgpuAddressSpaceHandle_t, NvLength, NvU64 *, nvgpuAllocInfo_t);
NV_STATUS NV_API_CALL rm_gpu_ops_get_p2p_caps(nvidia_stack_t *, nvgpuDeviceHandle_t, nvgpuDeviceHandle_t, nvgpuP2PCapsParams_t);
NV_STATUS NV_API_CALL rm_gpu_ops_memory_cpu_map(nvidia_stack_t *, nvgpuAddressSpaceHandle_t, NvU64, NvLength, void **, NvU32);
NV_STATUS NV_API_CALL rm_gpu_ops_memory_cpu_ummap(nvidia_stack_t *, nvgpuAddressSpaceHandle_t, void*);
NV_STATUS NV_API_CALL rm_gpu_ops_channel_allocate(nvidia_stack_t *, nvgpuAddressSpaceHandle_t, const nvgpuChannelAllocParams_t *, nvgpuChannelHandle_t *, nvgpuChannelInfo_t);
NV_STATUS NV_API_CALL rm_gpu_ops_channel_destroy(nvidia_stack_t *, nvgpuChannelHandle_t);
NV_STATUS NV_API_CALL rm_gpu_ops_memory_free(nvidia_stack_t *, nvgpuAddressSpaceHandle_t, NvU64);
NV_STATUS NV_API_CALL rm_gpu_ops_query_caps(nvidia_stack_t *, nvgpuDeviceHandle_t, nvgpuCaps_t);
NV_STATUS NV_API_CALL rm_gpu_ops_query_ces_caps(nvidia_stack_t *sp, nvgpuDeviceHandle_t, nvgpuCesCaps_t);
NV_STATUS NV_API_CALL rm_gpu_ops_get_gpu_info(nvidia_stack_t *, const NvProcessorUuid *pUuid, const nvgpuClientInfo_t *, nvgpuInfo_t *);
NV_STATUS NV_API_CALL rm_gpu_ops_service_device_interrupts_rm(nvidia_stack_t *, nvgpuDeviceHandle_t);
NV_STATUS NV_API_CALL rm_gpu_ops_dup_allocation(nvidia_stack_t *, nvgpuAddressSpaceHandle_t, NvU64, nvgpuAddressSpaceHandle_t, NvU64 *);
NV_STATUS NV_API_CALL rm_gpu_ops_dup_memory (nvidia_stack_t *, nvgpuDeviceHandle_t, NvHandle, NvHandle, NvHandle *, nvgpuMemoryInfo_t);
NV_STATUS NV_API_CALL rm_gpu_ops_free_duped_handle(nvidia_stack_t *, nvgpuDeviceHandle_t, NvHandle);
NV_STATUS NV_API_CALL rm_gpu_ops_get_fb_info(nvidia_stack_t *, nvgpuDeviceHandle_t, nvgpuFbInfo_t);
NV_STATUS NV_API_CALL rm_gpu_ops_get_ecc_info(nvidia_stack_t *, nvgpuDeviceHandle_t, nvgpuEccInfo_t);
NV_STATUS NV_API_CALL rm_gpu_ops_own_page_fault_intr(nvidia_stack_t *, nvgpuDeviceHandle_t, NvBool);
NV_STATUS NV_API_CALL rm_gpu_ops_init_fault_info(nvidia_stack_t *, nvgpuDeviceHandle_t, nvgpuFaultInfo_t);
NV_STATUS NV_API_CALL rm_gpu_ops_destroy_fault_info(nvidia_stack_t *, nvgpuDeviceHandle_t, nvgpuFaultInfo_t);
NV_STATUS NV_API_CALL rm_gpu_ops_get_non_replayable_faults(nvidia_stack_t *, nvgpuFaultInfo_t, void *, NvU32 *);
NV_STATUS NV_API_CALL rm_gpu_ops_has_pending_non_replayable_faults(nvidia_stack_t *, nvgpuFaultInfo_t, NvBool *);
NV_STATUS NV_API_CALL rm_gpu_ops_init_access_cntr_info(nvidia_stack_t *, nvgpuDeviceHandle_t, nvgpuAccessCntrInfo_t);
NV_STATUS NV_API_CALL rm_gpu_ops_destroy_access_cntr_info(nvidia_stack_t *, nvgpuDeviceHandle_t, nvgpuAccessCntrInfo_t);
NV_STATUS NV_API_CALL rm_gpu_ops_own_access_cntr_intr(nvidia_stack_t *, nvgpuSessionHandle_t, nvgpuAccessCntrInfo_t, NvBool);
NV_STATUS NV_API_CALL rm_gpu_ops_enable_access_cntr(nvidia_stack_t *, nvgpuDeviceHandle_t, nvgpuAccessCntrInfo_t, nvgpuAccessCntrConfig_t);
NV_STATUS NV_API_CALL rm_gpu_ops_disable_access_cntr(nvidia_stack_t *, nvgpuDeviceHandle_t, nvgpuAccessCntrInfo_t);
NV_STATUS NV_API_CALL rm_gpu_ops_set_page_directory (nvidia_stack_t *, nvgpuAddressSpaceHandle_t, NvU64, unsigned, NvBool, NvU32);
NV_STATUS NV_API_CALL rm_gpu_ops_unset_page_directory (nvidia_stack_t *, nvgpuAddressSpaceHandle_t);
NV_STATUS NV_API_CALL rm_gpu_ops_p2p_object_create(nvidia_stack_t *, nvgpuDeviceHandle_t, nvgpuDeviceHandle_t, NvHandle *);
void NV_API_CALL rm_gpu_ops_p2p_object_destroy(nvidia_stack_t *, nvgpuSessionHandle_t, NvHandle);
NV_STATUS NV_API_CALL rm_gpu_ops_get_external_alloc_ptes(nvidia_stack_t*, nvgpuAddressSpaceHandle_t, NvHandle, NvU64, NvU64, nvgpuExternalMappingInfo_t);
NV_STATUS NV_API_CALL rm_gpu_ops_retain_channel(nvidia_stack_t *, nvgpuAddressSpaceHandle_t, NvHandle, NvHandle, void **, nvgpuChannelInstanceInfo_t);
NV_STATUS NV_API_CALL rm_gpu_ops_bind_channel_resources(nvidia_stack_t *, void *, nvgpuChannelResourceBindParams_t);
void NV_API_CALL rm_gpu_ops_release_channel(nvidia_stack_t *, void *);
void NV_API_CALL rm_gpu_ops_stop_channel(nvidia_stack_t *, void *, NvBool);
NV_STATUS NV_API_CALL rm_gpu_ops_get_channel_resource_ptes(nvidia_stack_t *, nvgpuAddressSpaceHandle_t, NvP64, NvU64, NvU64, nvgpuExternalMappingInfo_t);
NV_STATUS NV_API_CALL rm_gpu_ops_report_non_replayable_fault(nvidia_stack_t *, nvgpuDeviceHandle_t, const void *);
NV_STATUS NV_API_CALL rm_gpu_ops_paging_channel_allocate(nvidia_stack_t *, nvgpuDeviceHandle_t, const nvgpuPagingChannelAllocParams_t *, nvgpuPagingChannelHandle_t *, nvgpuPagingChannelInfo_t);
void NV_API_CALL rm_gpu_ops_paging_channel_destroy(nvidia_stack_t *, nvgpuPagingChannelHandle_t);
NV_STATUS NV_API_CALL rm_gpu_ops_paging_channels_map(nvidia_stack_t *, nvgpuAddressSpaceHandle_t, NvU64, nvgpuDeviceHandle_t, NvU64 *);
void NV_API_CALL rm_gpu_ops_paging_channels_unmap(nvidia_stack_t *, nvgpuAddressSpaceHandle_t, NvU64, nvgpuDeviceHandle_t);
NV_STATUS NV_API_CALL rm_gpu_ops_paging_channel_push_stream(nvidia_stack_t *, nvgpuPagingChannelHandle_t, char *, NvU32);
#endif

5759
kernel-open/conftest.sh Executable file

File diff suppressed because it is too large

12
kernel-open/dkms.conf Normal file

@@ -0,0 +1,12 @@
PACKAGE_NAME="nvidia"
PACKAGE_VERSION="__VERSION_STRING"
AUTOINSTALL="yes"
# By default, DKMS will add KERNELRELEASE to the make command line; however,
# this will cause the kernel module build to infer that it was invoked via
# Kbuild directly instead of DKMS. The dkms(8) manual page recommends quoting
# the 'make' command name to suppress this behavior.
MAKE[0]="'make' -j__JOBS NV_EXCLUDE_BUILD_MODULES='__EXCLUDE_MODULES' KERNEL_UNAME=${kernelver} modules"
# The list of kernel modules will be generated by nvidia-installer at runtime.
__DKMS_MODULES
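# Illustrative sketch (not part of the original file): after the installer
# substitutes the placeholders above, a hypothetical expansion might read:
#
#   PACKAGE_NAME="nvidia"
#   PACKAGE_VERSION="515.43.04"
#   MAKE[0]="'make' -j8 NV_EXCLUDE_BUILD_MODULES='' KERNEL_UNAME=${kernelver} modules"
#   BUILT_MODULE_NAME[0]="nvidia"
#   DEST_MODULE_LOCATION[0]="/kernel/drivers/video"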


@@ -0,0 +1,79 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2015 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include "nv-pci-table.h"
/* Devices supported by RM */
struct pci_device_id nv_pci_table[] = {
{
.vendor = PCI_VENDOR_ID_NVIDIA,
.device = PCI_ANY_ID,
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
.class = (PCI_CLASS_DISPLAY_VGA << 8),
.class_mask = ~0
},
{
.vendor = PCI_VENDOR_ID_NVIDIA,
.device = PCI_ANY_ID,
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
.class = (PCI_CLASS_DISPLAY_3D << 8),
.class_mask = ~0
},
{ }
};
/* Devices supported by all drivers in nvidia.ko */
struct pci_device_id nv_module_device_table[] = {
{
.vendor = PCI_VENDOR_ID_NVIDIA,
.device = PCI_ANY_ID,
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
.class = (PCI_CLASS_DISPLAY_VGA << 8),
.class_mask = ~0
},
{
.vendor = PCI_VENDOR_ID_NVIDIA,
.device = PCI_ANY_ID,
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
.class = (PCI_CLASS_DISPLAY_3D << 8),
.class_mask = ~0
},
{
.vendor = PCI_VENDOR_ID_NVIDIA,
.device = PCI_ANY_ID,
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
.class = (PCI_CLASS_BRIDGE_OTHER << 8),
.class_mask = ~0
},
{ }
};
MODULE_DEVICE_TABLE(pci, nv_module_device_table);
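/*
 * Illustrative sketch (not part of the original file): nv_pci_table is the
 * kind of id table handed to the PCI core at driver registration, e.g.
 *
 *     static struct pci_driver example_driver = {    // hypothetical driver
 *         .name     = "example",
 *         .id_table = nv_pci_table,
 *     };
 *
 * The entries match any NVIDIA VGA or 3D controller by PCI class code
 * rather than listing individual device IDs.
 */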


@@ -0,0 +1,31 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2015 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _NV_PCI_TABLE_H_
#define _NV_PCI_TABLE_H_
#include <linux/pci.h>
extern struct pci_device_id nv_pci_table[];
#endif /* _NV_PCI_TABLE_H_ */


@@ -0,0 +1,121 @@
/*
* Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NVIDIA_DMA_FENCE_HELPER_H__
#define __NVIDIA_DMA_FENCE_HELPER_H__
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_FENCE_AVAILABLE)
/*
 * Fence headers moved to the file dma-fence.h, and struct fence was renamed
 * to dma_fence, by commit f54d1867005c3323f5d8ad83eed823e84226c429
 * (2016-10-25).
*/
#if defined(NV_LINUX_FENCE_H_PRESENT)
#include <linux/fence.h>
#else
#include <linux/dma-fence.h>
#endif
#if defined(NV_LINUX_FENCE_H_PRESENT)
typedef struct fence nv_dma_fence_t;
typedef struct fence_ops nv_dma_fence_ops_t;
#else
typedef struct dma_fence nv_dma_fence_t;
typedef struct dma_fence_ops nv_dma_fence_ops_t;
#endif
#if defined(NV_LINUX_FENCE_H_PRESENT)
#define NV_DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT FENCE_FLAG_ENABLE_SIGNAL_BIT
#else
#define NV_DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT
#endif
static inline bool nv_dma_fence_is_signaled(nv_dma_fence_t *fence) {
#if defined(NV_LINUX_FENCE_H_PRESENT)
return fence_is_signaled(fence);
#else
return dma_fence_is_signaled(fence);
#endif
}
static inline nv_dma_fence_t *nv_dma_fence_get(nv_dma_fence_t *fence)
{
#if defined(NV_LINUX_FENCE_H_PRESENT)
return fence_get(fence);
#else
return dma_fence_get(fence);
#endif
}
static inline void nv_dma_fence_put(nv_dma_fence_t *fence) {
#if defined(NV_LINUX_FENCE_H_PRESENT)
fence_put(fence);
#else
dma_fence_put(fence);
#endif
}
static inline signed long
nv_dma_fence_default_wait(nv_dma_fence_t *fence,
bool intr, signed long timeout) {
#if defined(NV_LINUX_FENCE_H_PRESENT)
return fence_default_wait(fence, intr, timeout);
#else
return dma_fence_default_wait(fence, intr, timeout);
#endif
}
static inline int nv_dma_fence_signal(nv_dma_fence_t *fence) {
#if defined(NV_LINUX_FENCE_H_PRESENT)
return fence_signal(fence);
#else
return dma_fence_signal(fence);
#endif
}
static inline u64 nv_dma_fence_context_alloc(unsigned num) {
#if defined(NV_LINUX_FENCE_H_PRESENT)
return fence_context_alloc(num);
#else
return dma_fence_context_alloc(num);
#endif
}
static inline void
nv_dma_fence_init(nv_dma_fence_t *fence,
const nv_dma_fence_ops_t *ops,
spinlock_t *lock, u64 context, unsigned seqno) {
#if defined(NV_LINUX_FENCE_H_PRESENT)
fence_init(fence, ops, lock, context, seqno);
#else
dma_fence_init(fence, ops, lock, context, seqno);
#endif
}
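/*
 * Illustrative sketch (not part of the original header): initializing a
 * fence with these wrappers. The ops table, fence, and lock are
 * hypothetical.
 *
 *     static const char *example_name(nv_dma_fence_t *f)
 *     { return "example"; }
 *
 *     static const nv_dma_fence_ops_t example_ops = {
 *         .get_driver_name   = example_name,
 *         .get_timeline_name = example_name,
 *         .wait              = nv_dma_fence_default_wait,
 *     };
 *
 *     u64 ctx = nv_dma_fence_context_alloc(1);
 *     nv_dma_fence_init(&fence, &example_ops, &lock, ctx, 1);
 */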
#endif /* defined(NV_DRM_FENCE_AVAILABLE) */
#endif /* __NVIDIA_DMA_FENCE_HELPER_H__ */


@@ -0,0 +1,80 @@
/*
* Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NVIDIA_DMA_RESV_HELPER_H__
#define __NVIDIA_DMA_RESV_HELPER_H__
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_FENCE_AVAILABLE)
/*
 * linux/reservation.h was renamed to linux/dma-resv.h by commit
* 52791eeec1d9 (dma-buf: rename reservation_object to dma_resv)
* in v5.4.
*/
#if defined(NV_LINUX_DMA_RESV_H_PRESENT)
#include <linux/dma-resv.h>
#else
#include <linux/reservation.h>
#endif
#include <nvidia-dma-fence-helper.h>
#if defined(NV_LINUX_DMA_RESV_H_PRESENT)
typedef struct dma_resv nv_dma_resv_t;
#else
typedef struct reservation_object nv_dma_resv_t;
#endif
static inline void nv_dma_resv_init(nv_dma_resv_t *obj)
{
#if defined(NV_LINUX_DMA_RESV_H_PRESENT)
dma_resv_init(obj);
#else
reservation_object_init(obj);
#endif
}
static inline void nv_dma_resv_fini(nv_dma_resv_t *obj)
{
#if defined(NV_LINUX_DMA_RESV_H_PRESENT)
dma_resv_fini(obj);
#else
    reservation_object_fini(obj);
#endif
}
static inline void nv_dma_resv_add_excl_fence(nv_dma_resv_t *obj,
nv_dma_fence_t *fence)
{
#if defined(NV_LINUX_DMA_RESV_H_PRESENT)
dma_resv_add_excl_fence(obj, fence);
#else
reservation_object_add_excl_fence(obj, fence);
#endif
}
#endif /* defined(NV_DRM_FENCE_AVAILABLE) */
#endif /* __NVIDIA_DMA_RESV_HELPER_H__ */


@@ -0,0 +1,64 @@
/*
* Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NVIDIA_DRM_CONFTEST_H__
#define __NVIDIA_DRM_CONFTEST_H__
#include "conftest.h"
/*
 * NOTE: This file is expected to be included at the top, before any of the
 * linux/drm headers.
 *
 * The goal is to redefine refcount_dec_and_test and refcount_inc before
 * including the drm header files, so that the drm macro/inline calls to
 * refcount_dec_and_test* and refcount_inc get redirected to the alternate
 * implementations in this file.
*/
#if NV_IS_EXPORT_SYMBOL_GPL_refcount_inc
#include <linux/refcount.h>
#define refcount_inc(__ptr) \
do { \
atomic_inc(&(__ptr)->refs); \
} while(0)
#endif
#if NV_IS_EXPORT_SYMBOL_GPL_refcount_dec_and_test
#include <linux/refcount.h>
#define refcount_dec_and_test(__ptr) atomic_dec_and_test(&(__ptr)->refs)
#endif
#if defined(NV_DRM_DRIVER_HAS_GEM_PRIME_RES_OBJ) || \
defined(NV_DRM_GEM_OBJECT_HAS_RESV)
#define NV_DRM_FENCE_AVAILABLE
#else
#undef NV_DRM_FENCE_AVAILABLE
#endif
#endif /* defined(__NVIDIA_DRM_CONFTEST_H__) */


@@ -0,0 +1,467 @@
/*
* Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include "nvidia-drm-conftest.h" /* NV_DRM_ATOMIC_MODESET_AVAILABLE */
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
#include "nvidia-drm-helper.h"
#include "nvidia-drm-priv.h"
#include "nvidia-drm-connector.h"
#include "nvidia-drm-utils.h"
#include "nvidia-drm-encoder.h"
/*
* Commit fcd70cd36b9b ("drm: Split out drm_probe_helper.h")
 * moved a number of helper function definitions from
* drm/drm_crtc_helper.h to a new drm_probe_helper.h.
*/
#if defined(NV_DRM_DRM_PROBE_HELPER_H_PRESENT)
#include <drm/drm_probe_helper.h>
#endif
#include <drm/drm_crtc_helper.h>
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
static void nv_drm_connector_destroy(struct drm_connector *connector)
{
struct nv_drm_connector *nv_connector = to_nv_connector(connector);
drm_connector_unregister(connector);
drm_connector_cleanup(connector);
if (nv_connector->edid != NULL) {
nv_drm_free(nv_connector->edid);
}
nv_drm_free(nv_connector);
}
static bool
__nv_drm_detect_encoder(struct NvKmsKapiDynamicDisplayParams *pDetectParams,
struct drm_connector *connector,
struct drm_encoder *encoder)
{
struct nv_drm_connector *nv_connector = to_nv_connector(connector);
struct drm_device *dev = connector->dev;
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct nv_drm_encoder *nv_encoder;
/*
* DVI-I connectors can drive both digital and analog
* encoders. If a digital connection has been forced then
* skip analog encoders.
*/
if (connector->connector_type == DRM_MODE_CONNECTOR_DVII &&
connector->force == DRM_FORCE_ON_DIGITAL &&
encoder->encoder_type == DRM_MODE_ENCODER_DAC) {
return false;
}
nv_encoder = to_nv_encoder(encoder);
memset(pDetectParams, 0, sizeof(*pDetectParams));
pDetectParams->handle = nv_encoder->hDisplay;
switch (connector->force) {
case DRM_FORCE_ON:
case DRM_FORCE_ON_DIGITAL:
pDetectParams->forceConnected = NV_TRUE;
break;
case DRM_FORCE_OFF:
pDetectParams->forceDisconnected = NV_TRUE;
break;
case DRM_FORCE_UNSPECIFIED:
break;
}
if (connector->override_edid) {
const struct drm_property_blob *edid = connector->edid_blob_ptr;
if (edid->length <= sizeof(pDetectParams->edid.buffer)) {
memcpy(pDetectParams->edid.buffer, edid->data, edid->length);
pDetectParams->edid.bufferSize = edid->length;
pDetectParams->overrideEdid = NV_TRUE;
} else {
WARN_ON(edid->length >
sizeof(pDetectParams->edid.buffer));
}
}
if (!nvKms->getDynamicDisplayInfo(nv_dev->pDevice, pDetectParams)) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to detect display state");
return false;
}
if (pDetectParams->connected) {
if (!pDetectParams->overrideEdid && pDetectParams->edid.bufferSize) {
if ((nv_connector->edid = nv_drm_calloc(
1,
pDetectParams->edid.bufferSize)) != NULL) {
memcpy(nv_connector->edid,
pDetectParams->edid.buffer,
pDetectParams->edid.bufferSize);
} else {
NV_DRM_LOG_ERR("Out of Memory");
}
}
return true;
}
return false;
}
static enum drm_connector_status __nv_drm_connector_detect_internal(
struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct nv_drm_connector *nv_connector = to_nv_connector(connector);
enum drm_connector_status status = connector_status_disconnected;
struct drm_encoder *detected_encoder = NULL;
struct nv_drm_encoder *nv_detected_encoder = NULL;
struct drm_encoder *encoder;
struct NvKmsKapiDynamicDisplayParams *pDetectParams = NULL;
BUG_ON(!mutex_is_locked(&dev->mode_config.mutex));
if (nv_connector->edid != NULL) {
nv_drm_free(nv_connector->edid);
nv_connector->edid = NULL;
}
if ((pDetectParams = nv_drm_calloc(
1,
sizeof(*pDetectParams))) == NULL) {
WARN_ON(pDetectParams == NULL);
goto done;
}
nv_drm_connector_for_each_possible_encoder(connector, encoder) {
if (__nv_drm_detect_encoder(pDetectParams, connector, encoder)) {
detected_encoder = encoder;
break;
}
} nv_drm_connector_for_each_possible_encoder_end;
if (detected_encoder == NULL) {
goto done;
}
nv_detected_encoder = to_nv_encoder(detected_encoder);
status = connector_status_connected;
nv_connector->nv_detected_encoder = nv_detected_encoder;
if (nv_connector->type == NVKMS_CONNECTOR_TYPE_DVI_I) {
drm_object_property_set_value(
&connector->base,
dev->mode_config.dvi_i_subconnector_property,
detected_encoder->encoder_type == DRM_MODE_ENCODER_DAC ?
DRM_MODE_SUBCONNECTOR_DVIA :
DRM_MODE_SUBCONNECTOR_DVID);
}
done:
nv_drm_free(pDetectParams);
return status;
}
static void __nv_drm_connector_force(struct drm_connector *connector)
{
__nv_drm_connector_detect_internal(connector);
}
static enum drm_connector_status
nv_drm_connector_detect(struct drm_connector *connector, bool force)
{
return __nv_drm_connector_detect_internal(connector);
}
static struct drm_connector_funcs nv_connector_funcs = {
#if defined(NV_DRM_ATOMIC_HELPER_CONNECTOR_DPMS_PRESENT)
.dpms = drm_atomic_helper_connector_dpms,
#endif
.destroy = nv_drm_connector_destroy,
.reset = drm_atomic_helper_connector_reset,
.force = __nv_drm_connector_force,
.detect = nv_drm_connector_detect,
.fill_modes = drm_helper_probe_single_connector_modes,
.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
};
static int nv_drm_connector_get_modes(struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct nv_drm_connector *nv_connector = to_nv_connector(connector);
struct nv_drm_encoder *nv_detected_encoder =
nv_connector->nv_detected_encoder;
NvU32 modeIndex = 0;
int count = 0;
if (nv_connector->edid != NULL) {
nv_drm_connector_update_edid_property(connector, nv_connector->edid);
}
while (1) {
struct drm_display_mode *mode;
struct NvKmsKapiDisplayMode displayMode;
NvBool valid = 0;
NvBool preferredMode = NV_FALSE;
int ret;
ret = nvKms->getDisplayMode(nv_dev->pDevice,
nv_detected_encoder->hDisplay,
modeIndex++, &displayMode, &valid,
&preferredMode);
if (ret < 0) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to get mode at modeIndex %d of NvKmsKapiDisplay 0x%08x",
modeIndex, nv_detected_encoder->hDisplay);
break;
}
        /* ret == 0 indicates the end of the mode list */
if (ret == 0) {
break;
}
/* Ignore invalid modes */
if (!valid) {
continue;
}
mode = drm_mode_create(connector->dev);
if (mode == NULL) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to create mode for NvKmsKapiDisplay 0x%08x",
nv_detected_encoder->hDisplay);
continue;
}
nvkms_display_mode_to_drm_mode(&displayMode, mode);
if (preferredMode) {
mode->type |= DRM_MODE_TYPE_PREFERRED;
}
/* Add a mode to a connector's probed_mode list */
drm_mode_probed_add(connector, mode);
count++;
}
return count;
}
static int nv_drm_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
{
struct drm_device *dev = connector->dev;
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct nv_drm_encoder *nv_detected_encoder =
to_nv_connector(connector)->nv_detected_encoder;
struct NvKmsKapiDisplayMode displayMode;
if (nv_detected_encoder == NULL) {
return MODE_BAD;
}
drm_mode_to_nvkms_display_mode(mode, &displayMode);
if (!nvKms->validateDisplayMode(nv_dev->pDevice,
nv_detected_encoder->hDisplay,
&displayMode)) {
return MODE_BAD;
}
return MODE_OK;
}
static struct drm_encoder*
nv_drm_connector_best_encoder(struct drm_connector *connector)
{
struct nv_drm_connector *nv_connector = to_nv_connector(connector);
if (nv_connector->nv_detected_encoder != NULL) {
return &nv_connector->nv_detected_encoder->base;
}
return NULL;
}
static const struct drm_connector_helper_funcs nv_connector_helper_funcs = {
.get_modes = nv_drm_connector_get_modes,
.mode_valid = nv_drm_connector_mode_valid,
.best_encoder = nv_drm_connector_best_encoder,
};
static struct drm_connector*
nv_drm_connector_new(struct drm_device *dev,
NvU32 physicalIndex, NvKmsConnectorType type,
NvBool internal,
char dpAddress[NVKMS_DP_ADDRESS_STRING_LENGTH])
{
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct nv_drm_connector *nv_connector = NULL;
int ret = -ENOMEM;
if ((nv_connector = nv_drm_calloc(1, sizeof(*nv_connector))) == NULL) {
goto failed;
}
if ((nv_connector->base.state =
nv_drm_calloc(1, sizeof(*nv_connector->base.state))) == NULL) {
goto failed_state_alloc;
}
nv_connector->base.state->connector = &nv_connector->base;
nv_connector->physicalIndex = physicalIndex;
nv_connector->type = type;
nv_connector->internal = internal;
strcpy(nv_connector->dpAddress, dpAddress);
ret = drm_connector_init(
dev,
&nv_connector->base, &nv_connector_funcs,
nvkms_connector_type_to_drm_connector_type(type, internal));
if (ret != 0) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to initialize connector created from physical index %u",
nv_connector->physicalIndex);
goto failed_connector_init;
}
drm_connector_helper_add(&nv_connector->base, &nv_connector_helper_funcs);
nv_connector->base.polled = DRM_CONNECTOR_POLL_HPD;
if (nv_connector->type == NVKMS_CONNECTOR_TYPE_VGA) {
nv_connector->base.polled =
DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT;
}
/* Register connector with DRM subsystem */
ret = drm_connector_register(&nv_connector->base);
if (ret != 0) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to register connector created from physical index %u",
nv_connector->physicalIndex);
goto failed_connector_register;
}
return &nv_connector->base;
failed_connector_register:
drm_connector_cleanup(&nv_connector->base);
failed_connector_init:
nv_drm_free(nv_connector->base.state);
failed_state_alloc:
nv_drm_free(nv_connector);
failed:
return ERR_PTR(ret);
}
/*
 * Get the connector with the given physical index if one exists. Otherwise, create and
* return a new connector.
*/
struct drm_connector*
nv_drm_get_connector(struct drm_device *dev,
NvU32 physicalIndex, NvKmsConnectorType type,
NvBool internal,
char dpAddress[NVKMS_DP_ADDRESS_STRING_LENGTH])
{
struct drm_connector *connector = NULL;
#if defined(NV_DRM_CONNECTOR_LIST_ITER_PRESENT)
struct drm_connector_list_iter conn_iter;
nv_drm_connector_list_iter_begin(dev, &conn_iter);
#else
struct drm_mode_config *config = &dev->mode_config;
mutex_lock(&config->mutex);
#endif
/* Lookup for existing connector with same physical index */
nv_drm_for_each_connector(connector, &conn_iter, dev) {
struct nv_drm_connector *nv_connector = to_nv_connector(connector);
if (nv_connector->physicalIndex == physicalIndex) {
BUG_ON(nv_connector->type != type ||
nv_connector->internal != internal);
if (strcmp(nv_connector->dpAddress, dpAddress) == 0) {
goto done;
}
}
}
connector = NULL;
done:
#if defined(NV_DRM_CONNECTOR_LIST_ITER_PRESENT)
nv_drm_connector_list_iter_end(&conn_iter);
#else
mutex_unlock(&config->mutex);
#endif
if (!connector) {
connector = nv_drm_connector_new(dev,
physicalIndex, type, internal,
dpAddress);
}
return connector;
}
#endif


@@ -0,0 +1,89 @@
/*
* Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NVIDIA_DRM_CONNECTOR_H__
#define __NVIDIA_DRM_CONNECTOR_H__
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
#if defined(NV_DRM_DRMP_H_PRESENT)
#include <drm/drmP.h>
#endif
#if defined(NV_DRM_DRM_CONNECTOR_H_PRESENT)
#include <drm/drm_connector.h>
#endif
#include "nvtypes.h"
#include "nvkms-api-types.h"
struct nv_drm_connector {
NvU32 physicalIndex;
NvBool internal;
NvKmsConnectorType type;
char dpAddress[NVKMS_DP_ADDRESS_STRING_LENGTH];
struct nv_drm_encoder *nv_detected_encoder;
struct edid *edid;
atomic_t connection_status_dirty;
struct drm_connector base;
};
static inline struct nv_drm_connector *to_nv_connector(
struct drm_connector *connector)
{
if (connector == NULL) {
return NULL;
}
return container_of(connector, struct nv_drm_connector, base);
}
static inline void nv_drm_connector_mark_connection_status_dirty(
struct nv_drm_connector *nv_connector)
{
atomic_cmpxchg(&nv_connector->connection_status_dirty, false, true);
}
static inline bool nv_drm_connector_check_connection_status_dirty_and_clear(
struct nv_drm_connector *nv_connector)
{
return atomic_cmpxchg(
&nv_connector->connection_status_dirty,
true,
false) == true;
}
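/*
* nv_drm_get_connector() is declared below; as an illustration, the encoder
* setup path in nvidia-drm-encoder.c resolves a display's connector with:
*
*     connector = nv_drm_get_connector(dev,
*                                      connectorInfo->physicalIndex,
*                                      connectorInfo->type,
*                                      displayInfo->internal,
*                                      displayInfo->dpAddress);
*/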
struct drm_connector*
nv_drm_get_connector(struct drm_device *dev,
NvU32 physicalIndex, NvKmsConnectorType type,
NvBool internal,
char dpAddress[NVKMS_DP_ADDRESS_STRING_LENGTH]);
#endif /* NV_DRM_ATOMIC_MODESET_AVAILABLE */
#endif /* __NVIDIA_DRM_CONNECTOR_H__ */

File diff suppressed because it is too large

@@ -0,0 +1,296 @@
/*
* Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NVIDIA_DRM_CRTC_H__
#define __NVIDIA_DRM_CRTC_H__
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
#include "nvidia-drm-helper.h"
#if defined(NV_DRM_DRMP_H_PRESENT)
#include <drm/drmP.h>
#endif
#include <drm/drm_crtc.h>
#if defined(NV_DRM_ALPHA_BLENDING_AVAILABLE) || defined(NV_DRM_ROTATION_AVAILABLE)
/* For DRM_ROTATE_* , DRM_REFLECT_* */
#include <drm/drm_blend.h>
#endif
#if defined(NV_DRM_ROTATION_AVAILABLE)
/* For DRM_MODE_ROTATE_* and DRM_MODE_REFLECT_* */
#include <uapi/drm/drm_mode.h>
#endif
#include "nvtypes.h"
#include "nvkms-kapi.h"
#if defined(NV_DRM_ROTATION_AVAILABLE)
/*
* Kernel commit c2c446ad29437bb92b157423c632286608ebd3ec (2017-05-19) added
* DRM_MODE_ROTATE_* and DRM_MODE_REFLECT_* to the UAPI and removed
* DRM_ROTATE_* and DRM_REFLECT_*.
*/
#if !defined(DRM_MODE_ROTATE_0)
#define DRM_MODE_ROTATE_0 DRM_ROTATE_0
#define DRM_MODE_ROTATE_90 DRM_ROTATE_90
#define DRM_MODE_ROTATE_180 DRM_ROTATE_180
#define DRM_MODE_ROTATE_270 DRM_ROTATE_270
#define DRM_MODE_REFLECT_X DRM_REFLECT_X
#define DRM_MODE_REFLECT_Y DRM_REFLECT_Y
#define DRM_MODE_ROTATE_MASK DRM_ROTATE_MASK
#define DRM_MODE_REFLECT_MASK DRM_REFLECT_MASK
#endif
#endif /* NV_DRM_ROTATION_AVAILABLE */
struct nv_drm_crtc {
NvU32 head;
/**
* @flip_list:
*
* List of flips pending to get processed by __nv_drm_handle_flip_event().
* Protected by @flip_list_lock.
*/
struct list_head flip_list;
/**
* @flip_list_lock:
*
* Spinlock to protect @flip_list.
*/
spinlock_t flip_list_lock;
struct drm_crtc base;
};
/**
* struct nv_drm_flip - flip state
*
* This state is used to consume the DRM completion event associated with
* each crtc state in an atomic commit.
*
* nv_drm_atomic_apply_modeset_config() consumes the DRM completion event,
* saves it into the flip state associated with the crtc, queues the flip
* state onto the crtc's flip list, and commits the atomic update to hardware.
*/
struct nv_drm_flip {
/**
* @event:
*
* Optional pointer to a DRM event to signal upon completion of
* the state update.
*/
struct drm_pending_vblank_event *event;
/**
* @pending_events:
*
* Number of HW events pending to signal completion of the state
* update.
*/
uint32_t pending_events;
/**
* @list_entry:
*
* Entry on the per-CRTC &nv_drm_crtc.flip_list. Protected by
* &nv_drm_crtc.flip_list_lock.
*/
struct list_head list_entry;
/**
* @deferred_flip_list:
*
* List of flip objects whose processing is deferred until this flip object
* has been processed. Protected by &nv_drm_crtc.flip_list_lock.
* nv_drm_atomic_commit() takes the last flip object from
* &nv_drm_crtc.flip_list and adds deferred flip objects to
* @deferred_flip_list; __nv_drm_handle_flip_event() then processes
* @deferred_flip_list.
*/
struct list_head deferred_flip_list;
};
struct nv_drm_crtc_state {
/**
* @base:
*
* Base DRM crtc state object for this.
*/
struct drm_crtc_state base;
/**
* @req_config:
*
* Requested head's modeset configuration corresponding to this crtc state.
*/
struct NvKmsKapiHeadRequestedConfig req_config;
/**
* @nv_flip:
*
* Flip state associated with this crtc state. It is allocated by
* nv_drm_atomic_crtc_duplicate_state(); on a successful commit it is
* consumed and queued onto the flip list by
* nv_drm_atomic_apply_modeset_config(), and finally destroyed by
* __nv_drm_handle_flip_event() once processed.
*
* If the atomic commit fails, this flip state is destroyed by
* nv_drm_atomic_crtc_destroy_state().
*/
struct nv_drm_flip *nv_flip;
};
static inline struct nv_drm_crtc_state *to_nv_crtc_state(struct drm_crtc_state *state)
{
return container_of(state, struct nv_drm_crtc_state, base);
}
struct nv_drm_plane {
/**
* @base:
*
* Base DRM plane object for this plane.
*/
struct drm_plane base;
/**
* @defaultCompositionMode:
*
* Default composition blending mode of this plane.
*/
enum NvKmsCompositionBlendingMode defaultCompositionMode;
/**
* @layer_idx:
*
* Index of this plane in the per-head array of layers.
*/
uint32_t layer_idx;
};
static inline struct nv_drm_plane *to_nv_plane(struct drm_plane *plane)
{
if (plane == NULL) {
return NULL;
}
return container_of(plane, struct nv_drm_plane, base);
}
struct nv_drm_plane_state {
struct drm_plane_state base;
s32 __user *fd_user_ptr;
};
static inline struct nv_drm_plane_state *to_nv_drm_plane_state(struct drm_plane_state *state)
{
return container_of(state, struct nv_drm_plane_state, base);
}
static inline struct nv_drm_crtc *to_nv_crtc(struct drm_crtc *crtc)
{
if (crtc == NULL) {
return NULL;
}
return container_of(crtc, struct nv_drm_crtc, base);
}
/*
* CRTCs are static objects: the list does not change between initialization
* and teardown of the device. Initialization/teardown paths are single
* threaded, so no locking is required.
*/
static inline
struct nv_drm_crtc *nv_drm_crtc_lookup(struct nv_drm_device *nv_dev, NvU32 head)
{
struct drm_crtc *crtc;
nv_drm_for_each_crtc(crtc, nv_dev->dev) {
struct nv_drm_crtc *nv_crtc = to_nv_crtc(crtc);
if (nv_crtc->head == head) {
return nv_crtc;
}
}
return NULL;
}
/**
* nv_drm_crtc_enqueue_flip - Enqueue an nv_drm_flip object onto the crtc's flip_list.
*/
static inline void nv_drm_crtc_enqueue_flip(struct nv_drm_crtc *nv_crtc,
struct nv_drm_flip *nv_flip)
{
spin_lock(&nv_crtc->flip_list_lock);
list_add(&nv_flip->list_entry, &nv_crtc->flip_list);
spin_unlock(&nv_crtc->flip_list_lock);
}
/**
* nv_drm_crtc_dequeue_flip - Dequeue an nv_drm_flip object from the crtc's
* flip_list. Each call consumes one pending hardware event; the flip is only
* removed from the list and returned once its pending_events count reaches
* zero.
*/
static inline
struct nv_drm_flip *nv_drm_crtc_dequeue_flip(struct nv_drm_crtc *nv_crtc)
{
struct nv_drm_flip *nv_flip = NULL;
uint32_t pending_events = 0;
spin_lock(&nv_crtc->flip_list_lock);
nv_flip = list_first_entry_or_null(&nv_crtc->flip_list,
struct nv_drm_flip, list_entry);
if (likely(nv_flip != NULL)) {
/*
* Decrement pending_event count and dequeue flip object if
* pending_event count becomes 0.
*/
pending_events = --nv_flip->pending_events;
if (!pending_events) {
list_del(&nv_flip->list_entry);
}
}
spin_unlock(&nv_crtc->flip_list_lock);
if (WARN_ON(nv_flip == NULL) || pending_events) {
return NULL;
}
return nv_flip;
}
void nv_drm_enumerate_crtcs_and_planes(
struct nv_drm_device *nv_dev,
const struct NvKmsKapiDeviceResourcesInfo *pResInfo);
int nv_drm_get_crtc_crc32_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep);
int nv_drm_get_crtc_crc32_v2_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep);
#endif /* NV_DRM_ATOMIC_MODESET_AVAILABLE */
#endif /* __NVIDIA_DRM_CRTC_H__ */

File diff suppressed because it is too large

@@ -0,0 +1,36 @@
/*
* Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NVIDIA_DRM_DRV_H__
#define __NVIDIA_DRM_DRV_H__
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_AVAILABLE)
int nv_drm_probe_devices(void);
void nv_drm_remove_devices(void);
#endif /* defined(NV_DRM_AVAILABLE) */
#endif /* __NVIDIA_DRM_DRV_H__ */

@@ -0,0 +1,352 @@
/*
* Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include "nvidia-drm-conftest.h" /* NV_DRM_ATOMIC_MODESET_AVAILABLE */
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
#include "nvidia-drm-priv.h"
#include "nvidia-drm-encoder.h"
#include "nvidia-drm-utils.h"
#include "nvidia-drm-connector.h"
#include "nvidia-drm-crtc.h"
#include "nvidia-drm-helper.h"
#include "nvmisc.h"
/*
* Commit fcd70cd36b9b ("drm: Split out drm_probe_helper.h")
* moves a number of helper function definitions from
* drm/drm_crtc_helper.h to a new drm_probe_helper.h.
*/
#if defined(NV_DRM_DRM_PROBE_HELPER_H_PRESENT)
#include <drm/drm_probe_helper.h>
#endif
#include <drm/drm_crtc_helper.h>
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
static void nv_drm_encoder_destroy(struct drm_encoder *encoder)
{
struct nv_drm_encoder *nv_encoder = to_nv_encoder(encoder);
drm_encoder_cleanup(encoder);
nv_drm_free(nv_encoder);
}
static const struct drm_encoder_funcs nv_encoder_funcs = {
.destroy = nv_drm_encoder_destroy,
};
static bool nv_drm_encoder_mode_fixup(struct drm_encoder *encoder,
const struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{
return true;
}
static void nv_drm_encoder_prepare(struct drm_encoder *encoder)
{
}
static void nv_drm_encoder_commit(struct drm_encoder *encoder)
{
}
static void nv_drm_encoder_mode_set(struct drm_encoder *encoder,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{
}
static const struct drm_encoder_helper_funcs nv_encoder_helper_funcs = {
.mode_fixup = nv_drm_encoder_mode_fixup,
.prepare = nv_drm_encoder_prepare,
.commit = nv_drm_encoder_commit,
.mode_set = nv_drm_encoder_mode_set,
};
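/*
* Convert an NVKMS head mask into a DRM CRTC mask: each CRTC whose head bit
* is set in headMask contributes its drm_crtc_mask() bit to the result.
*/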
static uint32_t get_crtc_mask(struct drm_device *dev, uint32_t headMask)
{
struct drm_crtc *crtc = NULL;
uint32_t crtc_mask = 0x0;
nv_drm_for_each_crtc(crtc, dev) {
struct nv_drm_crtc *nv_crtc = to_nv_crtc(crtc);
if (headMask & NVBIT(nv_crtc->head)) {
crtc_mask |= drm_crtc_mask(crtc);
}
}
return crtc_mask;
}
/*
* Helper function to create a new encoder for the given NvKmsKapiDisplay
* with the given signal format.
*/
static struct drm_encoder*
nv_drm_encoder_new(struct drm_device *dev,
NvKmsKapiDisplay hDisplay,
NvKmsConnectorSignalFormat format,
unsigned int crtc_mask)
{
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct nv_drm_encoder *nv_encoder = NULL;
int ret = 0;
/* Allocate an NVIDIA encoder object */
nv_encoder = nv_drm_calloc(1, sizeof(*nv_encoder));
if (nv_encoder == NULL) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to allocate memory for NVIDIA-DRM encoder object");
return ERR_PTR(-ENOMEM);
}
nv_encoder->hDisplay = hDisplay;
/* Initialize the base encoder object and add it to the drm subsystem */
ret = drm_encoder_init(dev,
&nv_encoder->base, &nv_encoder_funcs,
nvkms_connector_signal_to_drm_encoder_signal(format)
#if defined(NV_DRM_ENCODER_INIT_HAS_NAME_ARG)
, NULL
#endif
);
if (ret != 0) {
nv_drm_free(nv_encoder);
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to initialize encoder created from NvKmsKapiDisplay 0x%08x",
hDisplay);
return ERR_PTR(ret);
}
nv_encoder->base.possible_crtcs = crtc_mask;
drm_encoder_helper_add(&nv_encoder->base, &nv_encoder_helper_funcs);
return &nv_encoder->base;
}
/*
* Add an encoder for the given NvKmsKapiDisplay
*/
struct drm_encoder*
nv_drm_add_encoder(struct drm_device *dev, NvKmsKapiDisplay hDisplay)
{
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct NvKmsKapiStaticDisplayInfo *displayInfo = NULL;
struct NvKmsKapiConnectorInfo *connectorInfo = NULL;
struct drm_encoder *encoder = NULL;
struct nv_drm_encoder *nv_encoder = NULL;
struct drm_connector *connector = NULL;
int ret = 0;
/* Query NvKmsKapiStaticDisplayInfo and NvKmsKapiConnectorInfo */
if ((displayInfo = nv_drm_calloc(1, sizeof(*displayInfo))) == NULL) {
ret = -ENOMEM;
goto done;
}
if (!nvKms->getStaticDisplayInfo(nv_dev->pDevice, hDisplay, displayInfo)) {
ret = -EINVAL;
goto done;
}
connectorInfo = nvkms_get_connector_info(nv_dev->pDevice,
displayInfo->connectorHandle);
if (IS_ERR(connectorInfo)) {
ret = PTR_ERR(connectorInfo);
goto done;
}
/* Create and add drm encoder */
encoder = nv_drm_encoder_new(dev,
displayInfo->handle,
connectorInfo->signalFormat,
get_crtc_mask(dev, connectorInfo->headMask));
if (IS_ERR(encoder)) {
ret = PTR_ERR(encoder);
goto done;
}
/* Get connector from respective physical index */
connector =
nv_drm_get_connector(dev,
connectorInfo->physicalIndex,
connectorInfo->type,
displayInfo->internal, displayInfo->dpAddress);
if (IS_ERR(connector)) {
ret = PTR_ERR(connector);
goto failed_connector_encoder_attach;
}
/* Attach encoder and connector */
ret = nv_drm_connector_attach_encoder(connector, encoder);
if (ret != 0) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to attach encoder created from NvKmsKapiDisplay 0x%08x "
"to connector",
hDisplay);
goto failed_connector_encoder_attach;
}
nv_encoder = to_nv_encoder(encoder);
mutex_lock(&dev->mode_config.mutex);
nv_encoder->nv_connector = to_nv_connector(connector);
nv_drm_connector_mark_connection_status_dirty(nv_encoder->nv_connector);
mutex_unlock(&dev->mode_config.mutex);
goto done;
failed_connector_encoder_attach:
drm_encoder_cleanup(encoder);
nv_drm_free(encoder);
done:
nv_drm_free(displayInfo);
nv_drm_free(connectorInfo);
return ret != 0 ? ERR_PTR(ret) : encoder;
}
static inline struct nv_drm_encoder*
get_nv_encoder_from_nvkms_display(struct drm_device *dev,
NvKmsKapiDisplay hDisplay)
{
struct drm_encoder *encoder;
nv_drm_for_each_encoder(encoder, dev) {
struct nv_drm_encoder *nv_encoder = to_nv_encoder(encoder);
if (nv_encoder->hDisplay == hDisplay) {
return nv_encoder;
}
}
return NULL;
}
void nv_drm_handle_display_change(struct nv_drm_device *nv_dev,
NvKmsKapiDisplay hDisplay)
{
struct drm_device *dev = nv_dev->dev;
struct nv_drm_encoder *nv_encoder = NULL;
mutex_lock(&dev->mode_config.mutex);
nv_encoder = get_nv_encoder_from_nvkms_display(dev, hDisplay);
mutex_unlock(&dev->mode_config.mutex);
if (nv_encoder == NULL) {
return;
}
nv_drm_connector_mark_connection_status_dirty(nv_encoder->nv_connector);
drm_kms_helper_hotplug_event(dev);
}
void nv_drm_handle_dynamic_display_connected(struct nv_drm_device *nv_dev,
NvKmsKapiDisplay hDisplay)
{
struct drm_device *dev = nv_dev->dev;
struct drm_encoder *encoder = NULL;
struct nv_drm_encoder *nv_encoder = NULL;
/*
* Look for an existing encoder with the same hDisplay; a dynamic display
* that already has an encoder is unexpected, so log an error and bail out.
*/
nv_encoder = get_nv_encoder_from_nvkms_display(dev, hDisplay);
if (nv_encoder != NULL) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Encoder with NvKmsKapiDisplay 0x%08x already exists.",
hDisplay);
return;
}
encoder = nv_drm_add_encoder(dev, hDisplay);
if (IS_ERR(encoder)) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to add encoder for NvKmsKapiDisplay 0x%08x",
hDisplay);
return;
}
/*
* On some kernels, DRM has the notion of a "primary group" that
* tracks the global mode setting state for the device.
*
* On kernels where DRM has a primary group, we need to reinitialize
* after adding encoders and connectors.
*/
#if defined(NV_DRM_REINIT_PRIMARY_MODE_GROUP_PRESENT)
drm_reinit_primary_mode_group(dev);
#endif
drm_kms_helper_hotplug_event(dev);
}
#endif

@@ -0,0 +1,68 @@
/*
* Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NVIDIA_DRM_ENCODER_H__
#define __NVIDIA_DRM_ENCODER_H__
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
#include "nvidia-drm-priv.h"
#if defined(NV_DRM_DRM_ENCODER_H_PRESENT)
#include <drm/drm_encoder.h>
#else
#include <drm/drmP.h>
#endif
#include "nvkms-kapi.h"
struct nv_drm_encoder {
NvKmsKapiDisplay hDisplay;
struct nv_drm_connector *nv_connector;
struct drm_encoder base;
};
static inline struct nv_drm_encoder *to_nv_encoder(
struct drm_encoder *encoder)
{
if (encoder == NULL) {
return NULL;
}
return container_of(encoder, struct nv_drm_encoder, base);
}
struct drm_encoder*
nv_drm_add_encoder(struct drm_device *dev, NvKmsKapiDisplay hDisplay);
void nv_drm_handle_display_change(struct nv_drm_device *nv_dev,
NvKmsKapiDisplay hDisplay);
void nv_drm_handle_dynamic_display_connected(struct nv_drm_device *nv_dev,
NvKmsKapiDisplay hDisplay);
#endif /* NV_DRM_ATOMIC_MODESET_AVAILABLE */
#endif /* __NVIDIA_DRM_ENCODER_H__ */

@@ -0,0 +1,257 @@
/*
* Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include "nvidia-drm-conftest.h" /* NV_DRM_ATOMIC_MODESET_AVAILABLE */
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
#include "nvidia-drm-priv.h"
#include "nvidia-drm-ioctl.h"
#include "nvidia-drm-fb.h"
#include "nvidia-drm-utils.h"
#include "nvidia-drm-gem.h"
#include "nvidia-drm-helper.h"
#include "nvidia-drm-format.h"
#include <drm/drm_crtc_helper.h>
static void __nv_drm_framebuffer_free(struct nv_drm_framebuffer *nv_fb)
{
uint32_t i;
/* Unreference gem object */
for (i = 0; i < ARRAY_SIZE(nv_fb->nv_gem); i++) {
if (nv_fb->nv_gem[i] != NULL) {
nv_drm_gem_object_unreference_unlocked(nv_fb->nv_gem[i]);
}
}
/* Free framebuffer */
nv_drm_free(nv_fb);
}
static void nv_drm_framebuffer_destroy(struct drm_framebuffer *fb)
{
struct nv_drm_device *nv_dev = to_nv_device(fb->dev);
struct nv_drm_framebuffer *nv_fb = to_nv_framebuffer(fb);
/* Clean up the core framebuffer object */
drm_framebuffer_cleanup(fb);
/* Free NvKmsKapiSurface associated with this framebuffer object */
nvKms->destroySurface(nv_dev->pDevice, nv_fb->pSurface);
__nv_drm_framebuffer_free(nv_fb);
}
static int
nv_drm_framebuffer_create_handle(struct drm_framebuffer *fb,
struct drm_file *file, unsigned int *handle)
{
struct nv_drm_framebuffer *nv_fb = to_nv_framebuffer(fb);
return nv_drm_gem_handle_create(file,
nv_fb->nv_gem[0],
handle);
}
static struct drm_framebuffer_funcs nv_framebuffer_funcs = {
.destroy = nv_drm_framebuffer_destroy,
.create_handle = nv_drm_framebuffer_create_handle,
};
static struct nv_drm_framebuffer *nv_drm_framebuffer_alloc(
struct drm_device *dev,
struct drm_file *file,
struct drm_mode_fb_cmd2 *cmd)
{
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct nv_drm_framebuffer *nv_fb;
const int num_planes = nv_drm_format_num_planes(cmd->pixel_format);
uint32_t i;
/* Allocate memory for the framebuffer object */
nv_fb = nv_drm_calloc(1, sizeof(*nv_fb));
if (nv_fb == NULL) {
NV_DRM_DEV_DEBUG_DRIVER(
nv_dev,
"Failed to allocate memory for framebuffer object");
return ERR_PTR(-ENOMEM);
}
if (num_planes > ARRAY_SIZE(nv_fb->nv_gem)) {
NV_DRM_DEV_DEBUG_DRIVER(nv_dev, "Unsupported number of planes");
goto failed;
}
for (i = 0; i < num_planes; i++) {
if ((nv_fb->nv_gem[i] = nv_drm_gem_object_lookup(
dev,
file,
cmd->handles[i])) == NULL) {
NV_DRM_DEV_DEBUG_DRIVER(
nv_dev,
"Failed to find gem object of type nvkms memory");
goto failed;
}
}
return nv_fb;
failed:
__nv_drm_framebuffer_free(nv_fb);
return ERR_PTR(-ENOENT);
}
static int nv_drm_framebuffer_init(struct drm_device *dev,
struct nv_drm_framebuffer *nv_fb,
enum NvKmsSurfaceMemoryFormat format,
bool have_modifier,
uint64_t modifier)
{
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct NvKmsKapiCreateSurfaceParams params = { };
uint32_t i;
int ret;
/* Initialize the base framebuffer object and add it to drm subsystem */
ret = drm_framebuffer_init(dev, &nv_fb->base, &nv_framebuffer_funcs);
if (ret != 0) {
NV_DRM_DEV_DEBUG_DRIVER(
nv_dev,
"Failed to initialize framebuffer object");
return ret;
}
for (i = 0; i < ARRAY_SIZE(nv_fb->nv_gem); i++) {
if (nv_fb->nv_gem[i] != NULL) {
params.planes[i].memory = nv_fb->nv_gem[i]->pMemory;
params.planes[i].offset = nv_fb->base.offsets[i];
params.planes[i].pitch = nv_fb->base.pitches[i];
}
}
params.height = nv_fb->base.height;
params.width = nv_fb->base.width;
params.format = format;
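/*
* Decode the layout fields of the NVIDIA format modifier: bit 4 selects
* block-linear vs. pitch layout, and the low four bits carry log2 of the
* GOBs per block in Y, matching the two assignments below.
*/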
if (have_modifier) {
params.explicit_layout = true;
params.layout = (modifier & 0x10) ?
NvKmsSurfaceMemoryLayoutBlockLinear :
NvKmsSurfaceMemoryLayoutPitch;
params.log2GobsPerBlockY = modifier & 0xf;
} else {
params.explicit_layout = false;
}
/* Create NvKmsKapiSurface */
nv_fb->pSurface = nvKms->createSurface(nv_dev->pDevice, &params);
if (nv_fb->pSurface == NULL) {
NV_DRM_DEV_DEBUG_DRIVER(nv_dev, "Failed to create NvKmsKapiSurface");
drm_framebuffer_cleanup(&nv_fb->base);
return -EINVAL;
}
return 0;
}
struct drm_framebuffer *nv_drm_internal_framebuffer_create(
struct drm_device *dev,
struct drm_file *file,
struct drm_mode_fb_cmd2 *cmd)
{
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct nv_drm_framebuffer *nv_fb;
uint64_t modifier = 0;
int ret;
enum NvKmsSurfaceMemoryFormat format;
#if defined(NV_DRM_FORMAT_MODIFIERS_PRESENT)
int i;
#endif
bool have_modifier = false;
/* Check whether NvKms supports the given pixel format */
if (!nv_drm_format_to_nvkms_format(cmd->pixel_format, &format)) {
NV_DRM_DEV_DEBUG_DRIVER(
nv_dev,
"Unsupported drm pixel format 0x%08x", cmd->pixel_format);
return ERR_PTR(-EINVAL);
}
#if defined(NV_DRM_FORMAT_MODIFIERS_PRESENT)
if (cmd->flags & DRM_MODE_FB_MODIFIERS) {
have_modifier = true;
modifier = cmd->modifier[0];
for (i = 0; nv_dev->modifiers[i] != DRM_FORMAT_MOD_INVALID; i++) {
if (nv_dev->modifiers[i] == modifier) {
break;
}
}
if (nv_dev->modifiers[i] == DRM_FORMAT_MOD_INVALID) {
NV_DRM_DEV_DEBUG_DRIVER(
nv_dev,
"Invalid format modifier for framebuffer object: 0x%016llx",
modifier);
return ERR_PTR(-EINVAL);
}
}
#endif
nv_fb = nv_drm_framebuffer_alloc(dev, file, cmd);
if (IS_ERR(nv_fb)) {
return (struct drm_framebuffer *)nv_fb;
}
/* Fill out framebuffer metadata from the userspace fb creation request */
drm_helper_mode_fill_fb_struct(
#if defined(NV_DRM_HELPER_MODE_FILL_FB_STRUCT_HAS_DEV_ARG)
dev,
#endif
&nv_fb->base,
cmd);
/*
* Finish up FB initialization by creating the backing NVKMS surface and
* publishing the DRM fb
*/
ret = nv_drm_framebuffer_init(dev, nv_fb, format, have_modifier, modifier);
if (ret != 0) {
__nv_drm_framebuffer_free(nv_fb);
return ERR_PTR(ret);
}
return &nv_fb->base;
}
#endif

@@ -0,0 +1,66 @@
/*
* Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NVIDIA_DRM_FB_H__
#define __NVIDIA_DRM_FB_H__
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
#if defined(NV_DRM_DRMP_H_PRESENT)
#include <drm/drmP.h>
#endif
#if defined(NV_DRM_DRM_FRAMEBUFFER_H_PRESENT)
#include <drm/drm_framebuffer.h>
#endif
#include "nvidia-drm-gem-nvkms-memory.h"
#include "nvkms-kapi.h"
struct nv_drm_framebuffer {
struct NvKmsKapiSurface *pSurface;
struct nv_drm_gem_object*
nv_gem[NVKMS_MAX_PLANES_PER_SURFACE];
struct drm_framebuffer base;
};
static inline struct nv_drm_framebuffer *to_nv_framebuffer(
struct drm_framebuffer *fb)
{
if (fb == NULL) {
return NULL;
}
return container_of(fb, struct nv_drm_framebuffer, base);
}
struct drm_framebuffer *nv_drm_internal_framebuffer_create(
struct drm_device *dev,
struct drm_file *file,
struct drm_mode_fb_cmd2 *cmd);
#endif /* NV_DRM_ATOMIC_MODESET_AVAILABLE */
#endif /* __NVIDIA_DRM_FB_H__ */

@@ -0,0 +1,162 @@
/*
* Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include "nvidia-drm-conftest.h" /* NV_DRM_ATOMIC_MODESET_AVAILABLE */
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
#if defined(NV_DRM_DRMP_H_PRESENT)
#include <drm/drmP.h>
#endif
#include <linux/kernel.h>
#include <linux/bitmap.h>
#include "nvidia-drm-format.h"
#include "nvidia-drm-os-interface.h"
static const u32 nvkms_to_drm_format[] = {
/* RGB formats */
[NvKmsSurfaceMemoryFormatA1R5G5B5] = DRM_FORMAT_ARGB1555,
[NvKmsSurfaceMemoryFormatX1R5G5B5] = DRM_FORMAT_XRGB1555,
[NvKmsSurfaceMemoryFormatR5G6B5] = DRM_FORMAT_RGB565,
[NvKmsSurfaceMemoryFormatA8R8G8B8] = DRM_FORMAT_ARGB8888,
[NvKmsSurfaceMemoryFormatX8R8G8B8] = DRM_FORMAT_XRGB8888,
[NvKmsSurfaceMemoryFormatA2B10G10R10] = DRM_FORMAT_ABGR2101010,
[NvKmsSurfaceMemoryFormatX2B10G10R10] = DRM_FORMAT_XBGR2101010,
[NvKmsSurfaceMemoryFormatA8B8G8R8] = DRM_FORMAT_ABGR8888,
[NvKmsSurfaceMemoryFormatY8_U8__Y8_V8_N422] = DRM_FORMAT_YUYV,
[NvKmsSurfaceMemoryFormatU8_Y8__V8_Y8_N422] = DRM_FORMAT_UYVY,
/* YUV semi-planar formats
*
* NVKMS YUV semi-planar formats are MSB aligned. Yx___UxVx means
* that the UV components are packed like UUUUUVVVVV (MSB to LSB)
* and Yx___VxUx means VVVVVUUUUU (MSB to LSB).
*/
/*
* 2 plane YCbCr
* index 0 = Y plane, [7:0] Y
* index 1 = Cr:Cb plane, [15:0] Cr:Cb little endian
* or
* index 1 = Cb:Cr plane, [15:0] Cb:Cr little endian
*/
[NvKmsSurfaceMemoryFormatY8___V8U8_N444] = DRM_FORMAT_NV24, /* non-subsampled Cr:Cb plane */
[NvKmsSurfaceMemoryFormatY8___U8V8_N444] = DRM_FORMAT_NV42, /* non-subsampled Cb:Cr plane */
[NvKmsSurfaceMemoryFormatY8___V8U8_N422] = DRM_FORMAT_NV16, /* 2x1 subsampled Cr:Cb plane */
[NvKmsSurfaceMemoryFormatY8___U8V8_N422] = DRM_FORMAT_NV61, /* 2x1 subsampled Cb:Cr plane */
[NvKmsSurfaceMemoryFormatY8___V8U8_N420] = DRM_FORMAT_NV12, /* 2x2 subsampled Cr:Cb plane */
[NvKmsSurfaceMemoryFormatY8___U8V8_N420] = DRM_FORMAT_NV21, /* 2x2 subsampled Cb:Cr plane */
#if defined(DRM_FORMAT_P210)
/*
* 2 plane YCbCr MSB aligned
* index 0 = Y plane, [15:0] Y:x [10:6] little endian
* index 1 = Cr:Cb plane, [31:0] Cr:x:Cb:x [10:6:10:6] little endian
*
* 2x1 subsampled Cr:Cb plane, 10 bit per channel
*/
[NvKmsSurfaceMemoryFormatY10___V10U10_N422] = DRM_FORMAT_P210,
#endif
#if defined(DRM_FORMAT_P010)
/*
* 2 plane YCbCr MSB aligned
* index 0 = Y plane, [15:0] Y:x [10:6] little endian
* index 1 = Cr:Cb plane, [31:0] Cr:x:Cb:x [10:6:10:6] little endian
*
* 2x2 subsampled Cr:Cb plane 10 bits per channel
*/
[NvKmsSurfaceMemoryFormatY10___V10U10_N420] = DRM_FORMAT_P010,
#endif
#if defined(DRM_FORMAT_P012)
/*
* 2 plane YCbCr MSB aligned
* index 0 = Y plane, [15:0] Y:x [12:4] little endian
* index 1 = Cr:Cb plane, [31:0] Cr:x:Cb:x [12:4:12:4] little endian
*
* 2x2 subsampled Cr:Cb plane 12 bits per channel
*/
[NvKmsSurfaceMemoryFormatY12___V12U12_N420] = DRM_FORMAT_P012,
#endif
};
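/*
* Reverse-map a DRM format code to its NvKmsSurfaceMemoryFormat by scanning
* the (sparsely populated) nvkms_to_drm_format table above.
*/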
bool nv_drm_format_to_nvkms_format(u32 format,
enum NvKmsSurfaceMemoryFormat *nvkms_format)
{
enum NvKmsSurfaceMemoryFormat i;
for (i = 0; i < ARRAY_SIZE(nvkms_to_drm_format); i++) {
/*
* Note nvkms_to_drm_format[] is sparsely populated: it doesn't
* handle all NvKmsSurfaceMemoryFormat values, so be sure to skip 0
* entries when iterating through it.
*/
if (nvkms_to_drm_format[i] != 0 && nvkms_to_drm_format[i] == format) {
*nvkms_format = i;
return true;
}
}
return false;
}
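/*
* Build an array of DRM format codes from an NVKMS format bitmask. Returns
* NULL when no bit in the mask maps to a known DRM format. The array is
* allocated with nv_drm_calloc(); the caller is expected to release it with
* nv_drm_free().
*/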
uint32_t *nv_drm_format_array_alloc(
unsigned int *count,
const long unsigned int nvkms_format_mask)
{
enum NvKmsSurfaceMemoryFormat i;
unsigned int max_count = hweight64(nvkms_format_mask);
uint32_t *array = nv_drm_calloc(1, sizeof(uint32_t) * max_count);
if (array == NULL) {
return NULL;
}
*count = 0;
for_each_set_bit(i, &nvkms_format_mask,
sizeof(nvkms_format_mask) * BITS_PER_BYTE) {
if (i >= ARRAY_SIZE(nvkms_to_drm_format)) {
break;
}
/*
* Note nvkms_to_drm_format[] is sparsely populated: it doesn't
* handle all NvKmsSurfaceMemoryFormat values, so be sure to skip 0
* entries when iterating through it.
*/
if (nvkms_to_drm_format[i] == 0) {
continue;
}
array[(*count)++] = nvkms_to_drm_format[i];
}
if (*count == 0) {
nv_drm_free(array);
return NULL;
}
return array;
}
#endif

@@ -0,0 +1,43 @@
/*
* Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NVIDIA_DRM_FORMAT_H__
#define __NVIDIA_DRM_FORMAT_H__
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
#include <drm/drm_fourcc.h>
#include "nvkms-format.h"
bool nv_drm_format_to_nvkms_format(u32 format,
enum NvKmsSurfaceMemoryFormat *nvkms_format);
uint32_t *nv_drm_format_array_alloc(
unsigned int *count,
const long unsigned int nvkms_format_mask);
#endif /* NV_DRM_ATOMIC_MODESET_AVAILABLE */
#endif /* __NVIDIA_DRM_FORMAT_H__ */

@@ -0,0 +1,228 @@
/*
* Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_AVAILABLE)
#if defined(NV_DRM_DRM_PRIME_H_PRESENT)
#include <drm/drm_prime.h>
#endif
#if defined(NV_DRM_DRMP_H_PRESENT)
#include <drm/drmP.h>
#endif
#if defined(NV_DRM_DRM_DRV_H_PRESENT)
#include <drm/drm_drv.h>
#endif
#include "nvidia-drm-gem-dma-buf.h"
#include "nvidia-drm-ioctl.h"
#include "linux/dma-buf.h"
static inline
void __nv_drm_gem_dma_buf_free(struct nv_drm_gem_object *nv_gem)
{
struct nv_drm_device *nv_dev = nv_gem->nv_dev;
struct nv_drm_gem_dma_buf *nv_dma_buf = to_nv_dma_buf(nv_gem);
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
if (nv_dma_buf->base.pMemory) {
/* Free NvKmsKapiMemory handle associated with this gem object */
nvKms->freeMemory(nv_dev->pDevice, nv_dma_buf->base.pMemory);
}
#endif
drm_prime_gem_destroy(&nv_gem->base, nv_dma_buf->sgt);
nv_drm_free(nv_dma_buf);
}
static int __nv_drm_gem_dma_buf_create_mmap_offset(
struct nv_drm_device *nv_dev,
struct nv_drm_gem_object *nv_gem,
uint64_t *offset)
{
(void)nv_dev;
return nv_drm_gem_create_mmap_offset(nv_gem, offset);
}
static int __nv_drm_gem_dma_buf_mmap(struct nv_drm_gem_object *nv_gem,
struct vm_area_struct *vma)
{
struct dma_buf_attachment *attach = nv_gem->base.import_attach;
struct dma_buf *dma_buf = attach->dmabuf;
struct file *old_file;
int ret;
/* check if buffer supports mmap */
if (!dma_buf->file->f_op->mmap)
return -EINVAL;
/* Redirect the vma to the dma-buf's own file and rebase the offset */
get_file(dma_buf->file);
old_file = vma->vm_file;
vma->vm_file = dma_buf->file;
vma->vm_pgoff -= drm_vma_node_start(&nv_gem->base.vma_node);
ret = dma_buf->file->f_op->mmap(dma_buf->file, vma);
if (ret) {
/* restore old parameters on failure */
vma->vm_file = old_file;
fput(dma_buf->file);
} else {
if (old_file)
fput(old_file);
}
return ret;
}
const struct nv_drm_gem_object_funcs __nv_gem_dma_buf_ops = {
.free = __nv_drm_gem_dma_buf_free,
.create_mmap_offset = __nv_drm_gem_dma_buf_create_mmap_offset,
.mmap = __nv_drm_gem_dma_buf_mmap,
};
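/*
* Import a dma-buf as a GEM object: wrap the sg_table and, when modesetting
* is available, obtain an NvKmsKapiMemory handle for it so the buffer can
* later be exported through nv_drm_gem_export_dmabuf_memory_ioctl().
*/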
struct drm_gem_object*
nv_drm_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sgt)
{
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct dma_buf *dma_buf = attach->dmabuf;
struct nv_drm_gem_dma_buf *nv_dma_buf;
struct NvKmsKapiMemory *pMemory;
if ((nv_dma_buf =
nv_drm_calloc(1, sizeof(*nv_dma_buf))) == NULL) {
return NULL;
}
/* dma_buf->size must be a multiple of PAGE_SIZE */
BUG_ON(dma_buf->size % PAGE_SIZE);
pMemory = NULL;
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
if (drm_core_check_feature(dev, DRIVER_MODESET)) {
pMemory = nvKms->getSystemMemoryHandleFromDmaBuf(nv_dev->pDevice,
(NvP64)(NvUPtr)dma_buf,
dma_buf->size - 1);
}
#endif
nv_drm_gem_object_init(nv_dev, &nv_dma_buf->base,
&__nv_gem_dma_buf_ops, dma_buf->size, pMemory);
nv_dma_buf->sgt = sgt;
return &nv_dma_buf->base.base;
}
int nv_drm_gem_export_dmabuf_memory_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep)
{
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct drm_nvidia_gem_export_dmabuf_memory_params *p = data;
struct nv_drm_gem_dma_buf *nv_dma_buf = NULL;
int ret = 0;
struct NvKmsKapiMemory *pTmpMemory = NULL;
if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
ret = -EINVAL;
goto done;
}
if (p->__pad != 0) {
ret = -EINVAL;
NV_DRM_DEV_LOG_ERR(nv_dev, "Padding fields must be zeroed");
goto done;
}
if ((nv_dma_buf = nv_drm_gem_object_dma_buf_lookup(
dev, filep, p->handle)) == NULL) {
ret = -EINVAL;
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to lookup DMA-BUF GEM object for export: 0x%08x",
p->handle);
goto done;
}
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
if (drm_core_check_feature(dev, DRIVER_MODESET)) {
if (!nv_dma_buf->base.pMemory) {
/*
* Get RM system memory handle from SGT - RM will take a reference
* on this GEM object to prevent the DMA-BUF from being unpinned
* prematurely.
*/
pTmpMemory = nvKms->getSystemMemoryHandleFromSgt(
nv_dev->pDevice,
(NvP64)(NvUPtr)nv_dma_buf->sgt,
(NvP64)(NvUPtr)&nv_dma_buf->base.base,
nv_dma_buf->base.base.size - 1);
}
}
#endif
if (!nv_dma_buf->base.pMemory && !pTmpMemory) {
ret = -ENOMEM;
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to get memory to export from DMA-BUF GEM object: 0x%08x",
p->handle);
goto done;
}
if (!nvKms->exportMemory(nv_dev->pDevice,
nv_dma_buf->base.pMemory ?
nv_dma_buf->base.pMemory : pTmpMemory,
p->nvkms_params_ptr,
p->nvkms_params_size)) {
ret = -EINVAL;
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to export memory from DMA-BUF GEM object: 0x%08x",
p->handle);
goto done;
}
done:
if (pTmpMemory) {
/*
* Release reference on RM system memory to prevent circular
* refcounting. Another refcount will still be held by RM FD.
*/
nvKms->freeMemory(nv_dev->pDevice, pTmpMemory);
}
if (nv_dma_buf != NULL) {
nv_drm_gem_object_unreference_unlocked(&nv_dma_buf->base);
}
return ret;
}
#endif

@@ -0,0 +1,76 @@
/*
* Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NVIDIA_DRM_GEM_DMA_BUF_H__
#define __NVIDIA_DRM_GEM_DMA_BUF_H__
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_AVAILABLE)
#include "nvidia-drm-gem.h"
struct nv_drm_gem_dma_buf {
struct nv_drm_gem_object base;
struct sg_table *sgt;
};
extern const struct nv_drm_gem_object_funcs __nv_gem_dma_buf_ops;
static inline struct nv_drm_gem_dma_buf *to_nv_dma_buf(
struct nv_drm_gem_object *nv_gem)
{
if (nv_gem != NULL) {
return container_of(nv_gem, struct nv_drm_gem_dma_buf, base);
}
return NULL;
}
static inline
struct nv_drm_gem_dma_buf *nv_drm_gem_object_dma_buf_lookup(
struct drm_device *dev,
struct drm_file *filp,
u32 handle)
{
struct nv_drm_gem_object *nv_gem =
nv_drm_gem_object_lookup(dev, filp, handle);
if (nv_gem != NULL && nv_gem->ops != &__nv_gem_dma_buf_ops) {
nv_drm_gem_object_unreference_unlocked(nv_gem);
return NULL;
}
return to_nv_dma_buf(nv_gem);
}
struct drm_gem_object*
nv_drm_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sgt);
int nv_drm_gem_export_dmabuf_memory_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep);
#endif
#endif /* __NVIDIA_DRM_GEM_DMA_BUF_H__ */

@@ -0,0 +1,585 @@
/*
* Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
#include "nvidia-drm-gem-nvkms-memory.h"
#include "nvidia-drm-helper.h"
#include "nvidia-drm-ioctl.h"
#if defined(NV_DRM_DRM_DRV_H_PRESENT)
#include <drm/drm_drv.h>
#endif
#if defined(NV_DRM_DRM_PRIME_H_PRESENT)
#include <drm/drm_prime.h>
#endif
#include <linux/io.h>
#include "nv-mm.h"
static void __nv_drm_gem_nvkms_memory_free(struct nv_drm_gem_object *nv_gem)
{
struct nv_drm_device *nv_dev = nv_gem->nv_dev;
struct nv_drm_gem_nvkms_memory *nv_nvkms_memory =
to_nv_nvkms_memory(nv_gem);
if (nv_nvkms_memory->physically_mapped) {
if (nv_nvkms_memory->pWriteCombinedIORemapAddress != NULL) {
iounmap(nv_nvkms_memory->pWriteCombinedIORemapAddress);
}
nvKms->unmapMemory(nv_dev->pDevice,
nv_nvkms_memory->base.pMemory,
NVKMS_KAPI_MAPPING_TYPE_USER,
nv_nvkms_memory->pPhysicalAddress);
}
if (nv_nvkms_memory->pages_count != 0) {
nvKms->freeMemoryPages((NvU64 *)nv_nvkms_memory->pages);
}
/* Free NvKmsKapiMemory handle associated with this gem object */
nvKms->freeMemory(nv_dev->pDevice, nv_nvkms_memory->base.pMemory);
nv_drm_free(nv_nvkms_memory);
}
static int __nv_drm_gem_nvkms_mmap(struct nv_drm_gem_object *nv_gem,
struct vm_area_struct *vma)
{
return drm_gem_mmap_obj(&nv_gem->base,
drm_vma_node_size(&nv_gem->base.vma_node) << PAGE_SHIFT, vma);
}
static vm_fault_t __nv_drm_gem_nvkms_handle_vma_fault(
struct nv_drm_gem_object *nv_gem,
struct vm_area_struct *vma,
struct vm_fault *vmf)
{
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
struct nv_drm_gem_nvkms_memory *nv_nvkms_memory =
to_nv_nvkms_memory(nv_gem);
unsigned long address = nv_page_fault_va(vmf);
struct drm_gem_object *gem = vma->vm_private_data;
unsigned long page_offset, pfn;
vm_fault_t ret;
page_offset = vmf->pgoff - drm_vma_node_start(&gem->vma_node);
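/*
* Physically mapped (typically vidmem) objects have no page array; fault
* in PFNs from the contiguous physical mapping. Otherwise use the page
* array filled in by getMemoryPages().
*/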
if (nv_nvkms_memory->pages_count == 0) {
pfn = (unsigned long)(uintptr_t)nv_nvkms_memory->pPhysicalAddress;
pfn >>= PAGE_SHIFT;
pfn += page_offset;
} else {
BUG_ON(page_offset >= nv_nvkms_memory->pages_count);
pfn = page_to_pfn(nv_nvkms_memory->pages[page_offset]);
}
#if defined(NV_VMF_INSERT_PFN_PRESENT)
ret = vmf_insert_pfn(vma, address, pfn);
#else
ret = vm_insert_pfn(vma, address, pfn);
switch (ret) {
case 0:
case -EBUSY:
/*
* EBUSY indicates that another thread already handled
* the faulted range.
*/
ret = VM_FAULT_NOPAGE;
break;
case -ENOMEM:
ret = VM_FAULT_OOM;
break;
default:
WARN_ONCE(1, "Unhandled error in %s: %d\n", __FUNCTION__, ret);
ret = VM_FAULT_SIGBUS;
break;
}
#endif /* defined(NV_VMF_INSERT_PFN_PRESENT) */
return ret;
#endif /* defined(NV_DRM_ATOMIC_MODESET_AVAILABLE) */
return VM_FAULT_SIGBUS;
}
static struct drm_gem_object *__nv_drm_gem_nvkms_prime_dup(
struct drm_device *dev,
const struct nv_drm_gem_object *nv_gem_src);
static int __nv_drm_gem_nvkms_map(
struct nv_drm_device *nv_dev,
struct NvKmsKapiMemory *pMemory,
struct nv_drm_gem_nvkms_memory *nv_nvkms_memory,
uint64_t size)
{
if (!nv_dev->hasVideoMemory) {
return 0;
}
if (!nvKms->mapMemory(nv_dev->pDevice,
pMemory,
NVKMS_KAPI_MAPPING_TYPE_USER,
&nv_nvkms_memory->pPhysicalAddress)) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to map NvKmsKapiMemory 0x%p",
pMemory);
return -ENOMEM;
}
nv_nvkms_memory->pWriteCombinedIORemapAddress = ioremap_wc(
(uintptr_t)nv_nvkms_memory->pPhysicalAddress,
size);
if (!nv_nvkms_memory->pWriteCombinedIORemapAddress) {
NV_DRM_DEV_LOG_INFO(
nv_dev,
"Failed to ioremap_wc NvKmsKapiMemory 0x%p",
pMemory);
}
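/*
* The write-combined remap above is best effort; physically_mapped tracks
* only the NVKMS user mapping established earlier in this function.
*/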
nv_nvkms_memory->physically_mapped = true;
return 0;
}
static int __nv_drm_gem_map_nvkms_memory_offset(
struct nv_drm_device *nv_dev,
struct nv_drm_gem_object *nv_gem,
uint64_t *offset)
{
struct nv_drm_gem_nvkms_memory *nv_nvkms_memory =
to_nv_nvkms_memory(nv_gem);
if (!nv_nvkms_memory->physically_mapped) {
int ret = __nv_drm_gem_nvkms_map(nv_dev,
nv_nvkms_memory->base.pMemory,
nv_nvkms_memory,
nv_nvkms_memory->base.base.size);
if (ret) {
return ret;
}
}
return nv_drm_gem_create_mmap_offset(&nv_nvkms_memory->base, offset);
}
static struct sg_table *__nv_drm_gem_nvkms_memory_prime_get_sg_table(
struct nv_drm_gem_object *nv_gem)
{
struct nv_drm_device *nv_dev = nv_gem->nv_dev;
struct nv_drm_gem_nvkms_memory *nv_nvkms_memory =
to_nv_nvkms_memory(nv_gem);
struct sg_table *sg_table;
if (nv_nvkms_memory->pages_count == 0) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Cannot create sg_table for NvKmsKapiMemory 0x%p",
nv_gem->pMemory);
return NULL;
}
sg_table = nv_drm_prime_pages_to_sg(nv_dev->dev,
nv_nvkms_memory->pages,
nv_nvkms_memory->pages_count);
return sg_table;
}
const struct nv_drm_gem_object_funcs nv_gem_nvkms_memory_ops = {
.free = __nv_drm_gem_nvkms_memory_free,
.prime_dup = __nv_drm_gem_nvkms_prime_dup,
.mmap = __nv_drm_gem_nvkms_mmap,
.handle_vma_fault = __nv_drm_gem_nvkms_handle_vma_fault,
.create_mmap_offset = __nv_drm_gem_map_nvkms_memory_offset,
.prime_get_sg_table = __nv_drm_gem_nvkms_memory_prime_get_sg_table,
};
static int __nv_drm_nvkms_gem_obj_init(
struct nv_drm_device *nv_dev,
struct nv_drm_gem_nvkms_memory *nv_nvkms_memory,
struct NvKmsKapiMemory *pMemory,
uint64_t size)
{
NvU64 *pages = NULL;
NvU32 numPages = 0;
nv_nvkms_memory->pPhysicalAddress = NULL;
nv_nvkms_memory->pWriteCombinedIORemapAddress = NULL;
nv_nvkms_memory->physically_mapped = false;
if (!nvKms->getMemoryPages(nv_dev->pDevice,
pMemory,
&pages,
&numPages) &&
!nv_dev->hasVideoMemory) {
/* GetMemoryPages may fail for vidmem allocations,
* but it should not fail for sysmem allocations. */
NV_DRM_DEV_LOG_ERR(nv_dev,
"Failed to get memory pages for NvKmsKapiMemory 0x%p",
pMemory);
return -ENOMEM;
}
nv_nvkms_memory->pages_count = numPages;
nv_nvkms_memory->pages = (struct page **)pages;
nv_drm_gem_object_init(nv_dev,
&nv_nvkms_memory->base,
&nv_gem_nvkms_memory_ops,
size,
pMemory);
return 0;
}
int nv_drm_dumb_create(
struct drm_file *file_priv,
struct drm_device *dev, struct drm_mode_create_dumb *args)
{
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct nv_drm_gem_nvkms_memory *nv_nvkms_memory;
uint8_t compressible = 0;
struct NvKmsKapiMemory *pMemory;
int ret = 0;
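/*
* Pitch is bytes per pixel (bpp rounded up to whole bytes) times width,
* rounded up to the device's pitch alignment requirement.
*/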
args->pitch = roundup(args->width * ((args->bpp + 7) >> 3),
nv_dev->pitchAlignment);
args->size = args->height * args->pitch;
/* Core DRM requires gem object size to be aligned with PAGE_SIZE */
args->size = roundup(args->size, PAGE_SIZE);
if ((nv_nvkms_memory =
nv_drm_calloc(1, sizeof(*nv_nvkms_memory))) == NULL) {
ret = -ENOMEM;
goto fail;
}
if (nv_dev->hasVideoMemory) {
pMemory = nvKms->allocateVideoMemory(nv_dev->pDevice,
NvKmsSurfaceMemoryLayoutPitch,
args->size,
&compressible);
} else {
pMemory = nvKms->allocateSystemMemory(nv_dev->pDevice,
NvKmsSurfaceMemoryLayoutPitch,
args->size,
&compressible);
}
if (pMemory == NULL) {
ret = -ENOMEM;
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to allocate NvKmsKapiMemory for dumb object of size %llu",
args->size);
goto nvkms_alloc_memory_failed;
}
ret = __nv_drm_nvkms_gem_obj_init(nv_dev, nv_nvkms_memory, pMemory, args->size);
if (ret) {
goto nvkms_gem_obj_init_failed;
}
/* Always map dumb buffer memory up front. Clients are only expected
* to use dumb buffers for software rendering, so they're not much use
* without a CPU mapping.
*/
ret = __nv_drm_gem_nvkms_map(nv_dev, pMemory, nv_nvkms_memory, args->size);
if (ret) {
nv_drm_gem_object_unreference_unlocked(&nv_nvkms_memory->base);
goto fail;
}
return nv_drm_gem_handle_create_drop_reference(file_priv,
&nv_nvkms_memory->base,
&args->handle);
nvkms_gem_obj_init_failed:
nvKms->freeMemory(nv_dev->pDevice, pMemory);
nvkms_alloc_memory_failed:
nv_drm_free(nv_nvkms_memory);
fail:
return ret;
}
int nv_drm_gem_import_nvkms_memory_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep)
{
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct drm_nvidia_gem_import_nvkms_memory_params *p = data;
struct nv_drm_gem_nvkms_memory *nv_nvkms_memory;
struct NvKmsKapiMemory *pMemory;
int ret;
if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
ret = -EINVAL;
goto failed;
}
if ((nv_nvkms_memory =
nv_drm_calloc(1, sizeof(*nv_nvkms_memory))) == NULL) {
ret = -ENOMEM;
goto failed;
}
pMemory = nvKms->importMemory(nv_dev->pDevice,
p->mem_size,
p->nvkms_params_ptr,
p->nvkms_params_size);
if (pMemory == NULL) {
ret = -EINVAL;
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to import NVKMS memory to GEM object");
goto nvkms_import_memory_failed;
}
ret = __nv_drm_nvkms_gem_obj_init(nv_dev, nv_nvkms_memory, pMemory, p->mem_size);
if (ret) {
goto nvkms_gem_obj_init_failed;
}
return nv_drm_gem_handle_create_drop_reference(filep,
&nv_nvkms_memory->base,
&p->handle);
nvkms_gem_obj_init_failed:
nvKms->freeMemory(nv_dev->pDevice, pMemory);
nvkms_import_memory_failed:
nv_drm_free(nv_nvkms_memory);
failed:
return ret;
}
int nv_drm_gem_export_nvkms_memory_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep)
{
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct drm_nvidia_gem_export_nvkms_memory_params *p = data;
struct nv_drm_gem_nvkms_memory *nv_nvkms_memory = NULL;
int ret = 0;
if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
ret = -EINVAL;
goto done;
}
if (p->__pad != 0) {
ret = -EINVAL;
NV_DRM_DEV_LOG_ERR(nv_dev, "Padding fields must be zeroed");
goto done;
}
if ((nv_nvkms_memory = nv_drm_gem_object_nvkms_memory_lookup(
dev,
filep,
p->handle)) == NULL) {
ret = -EINVAL;
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to lookup NVKMS gem object for export: 0x%08x",
p->handle);
goto done;
}
if (!nvKms->exportMemory(nv_dev->pDevice,
nv_nvkms_memory->base.pMemory,
p->nvkms_params_ptr,
p->nvkms_params_size)) {
ret = -EINVAL;
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to export memory from NVKMS GEM object: 0x%08x", p->handle);
goto done;
}
done:
if (nv_nvkms_memory != NULL) {
nv_drm_gem_object_unreference_unlocked(&nv_nvkms_memory->base);
}
return ret;
}
int nv_drm_gem_alloc_nvkms_memory_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep)
{
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct drm_nvidia_gem_alloc_nvkms_memory_params *p = data;
struct nv_drm_gem_nvkms_memory *nv_nvkms_memory = NULL;
struct NvKmsKapiMemory *pMemory;
enum NvKmsSurfaceMemoryLayout layout;
int ret = 0;
if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
ret = -EINVAL;
goto failed;
}
if (p->__pad != 0) {
ret = -EINVAL;
NV_DRM_DEV_LOG_ERR(nv_dev, "non-zero value in padding field");
goto failed;
}
if ((nv_nvkms_memory =
nv_drm_calloc(1, sizeof(*nv_nvkms_memory))) == NULL) {
ret = -ENOMEM;
goto failed;
}
layout = p->block_linear ?
NvKmsSurfaceMemoryLayoutBlockLinear : NvKmsSurfaceMemoryLayoutPitch;
if (nv_dev->hasVideoMemory) {
pMemory = nvKms->allocateVideoMemory(nv_dev->pDevice,
layout,
p->memory_size,
&p->compressible);
} else {
pMemory = nvKms->allocateSystemMemory(nv_dev->pDevice,
layout,
p->memory_size,
&p->compressible);
}
if (pMemory == NULL) {
ret = -EINVAL;
NV_DRM_DEV_LOG_ERR(nv_dev,
"Failed to allocate NVKMS memory for GEM object");
goto nvkms_alloc_memory_failed;
}
ret = __nv_drm_nvkms_gem_obj_init(nv_dev, nv_nvkms_memory, pMemory,
p->memory_size);
if (ret) {
goto nvkms_gem_obj_init_failed;
}
return nv_drm_gem_handle_create_drop_reference(filep,
&nv_nvkms_memory->base,
&p->handle);
nvkms_gem_obj_init_failed:
nvKms->freeMemory(nv_dev->pDevice, pMemory);
nvkms_alloc_memory_failed:
nv_drm_free(nv_nvkms_memory);
failed:
return ret;
}
static struct drm_gem_object *__nv_drm_gem_nvkms_prime_dup(
struct drm_device *dev,
const struct nv_drm_gem_object *nv_gem_src)
{
struct nv_drm_device *nv_dev = to_nv_device(dev);
const struct nv_drm_device *nv_dev_src;
const struct nv_drm_gem_nvkms_memory *nv_nvkms_memory_src;
struct nv_drm_gem_nvkms_memory *nv_nvkms_memory;
struct NvKmsKapiMemory *pMemory;
BUG_ON(nv_gem_src == NULL || nv_gem_src->ops != &nv_gem_nvkms_memory_ops);
nv_dev_src = to_nv_device(nv_gem_src->base.dev);
nv_nvkms_memory_src = to_nv_nvkms_memory_const(nv_gem_src);
if ((nv_nvkms_memory =
nv_drm_calloc(1, sizeof(*nv_nvkms_memory))) == NULL) {
return NULL;
}
pMemory = nvKms->dupMemory(nv_dev->pDevice,
nv_dev_src->pDevice, nv_gem_src->pMemory);
if (pMemory == NULL) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to import NVKMS memory to GEM object");
goto nvkms_dup_memory_failed;
}
if (__nv_drm_nvkms_gem_obj_init(nv_dev,
nv_nvkms_memory,
pMemory,
nv_gem_src->base.size)) {
goto nvkms_gem_obj_init_failed;
}
return &nv_nvkms_memory->base.base;
nvkms_gem_obj_init_failed:
nvKms->freeMemory(nv_dev->pDevice, pMemory);
nvkms_dup_memory_failed:
nv_drm_free(nv_nvkms_memory);
return NULL;
}
int nv_drm_dumb_map_offset(struct drm_file *file,
struct drm_device *dev, uint32_t handle,
uint64_t *offset)
{
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct nv_drm_gem_nvkms_memory *nv_nvkms_memory;
int ret = -EINVAL;
if ((nv_nvkms_memory = nv_drm_gem_object_nvkms_memory_lookup(
dev,
file,
handle)) == NULL) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to lookup gem object for mapping: 0x%08x",
handle);
return ret;
}
ret = __nv_drm_gem_map_nvkms_memory_offset(nv_dev,
&nv_nvkms_memory->base, offset);
nv_drm_gem_object_unreference_unlocked(&nv_nvkms_memory->base);
return ret;
}
int nv_drm_dumb_destroy(struct drm_file *file,
struct drm_device *dev,
uint32_t handle)
{
return drm_gem_handle_delete(file, handle);
}
#endif

View File

@@ -0,0 +1,110 @@
/*
* Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NVIDIA_DRM_GEM_NVKMS_MEMORY_H__
#define __NVIDIA_DRM_GEM_NVKMS_MEMORY_H__
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
#include "nvidia-drm-gem.h"
struct nv_drm_gem_nvkms_memory {
struct nv_drm_gem_object base;
bool physically_mapped;
void *pPhysicalAddress;
void *pWriteCombinedIORemapAddress;
struct page **pages;
unsigned long pages_count;
};
extern const struct nv_drm_gem_object_funcs nv_gem_nvkms_memory_ops;
static inline struct nv_drm_gem_nvkms_memory *to_nv_nvkms_memory(
struct nv_drm_gem_object *nv_gem)
{
if (nv_gem != NULL) {
return container_of(nv_gem, struct nv_drm_gem_nvkms_memory, base);
}
return NULL;
}
static inline struct nv_drm_gem_nvkms_memory *to_nv_nvkms_memory_const(
const struct nv_drm_gem_object *nv_gem)
{
if (nv_gem != NULL) {
return container_of(nv_gem, struct nv_drm_gem_nvkms_memory, base);
}
return NULL;
}
static inline
struct nv_drm_gem_nvkms_memory *nv_drm_gem_object_nvkms_memory_lookup(
struct drm_device *dev,
struct drm_file *filp,
u32 handle)
{
struct nv_drm_gem_object *nv_gem =
nv_drm_gem_object_lookup(dev, filp, handle);
if (nv_gem != NULL && nv_gem->ops != &nv_gem_nvkms_memory_ops) {
nv_drm_gem_object_unreference_unlocked(nv_gem);
return NULL;
}
return to_nv_nvkms_memory(nv_gem);
}
int nv_drm_dumb_create(
struct drm_file *file_priv,
struct drm_device *dev, struct drm_mode_create_dumb *args);
int nv_drm_gem_import_nvkms_memory_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep);
int nv_drm_gem_export_nvkms_memory_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep);
int nv_drm_gem_alloc_nvkms_memory_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep);
int nv_drm_dumb_map_offset(struct drm_file *file,
struct drm_device *dev, uint32_t handle,
uint64_t *offset);
int nv_drm_dumb_destroy(struct drm_file *file,
struct drm_device *dev,
uint32_t handle);
struct drm_gem_object *nv_drm_gem_nvkms_prime_import(
struct drm_device *dev,
struct drm_gem_object *gem);
#endif
#endif /* __NVIDIA_DRM_GEM_NVKMS_MEMORY_H__ */

View File

@@ -0,0 +1,217 @@
/*
* Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_AVAILABLE)
#if defined(NV_DRM_DRM_PRIME_H_PRESENT)
#include <drm/drm_prime.h>
#endif
#include "nvidia-drm-gem-user-memory.h"
#include "nvidia-drm-helper.h"
#include "nvidia-drm-ioctl.h"
#include "linux/dma-buf.h"
#include "linux/mm.h"
#include "nv-mm.h"
static inline
void __nv_drm_gem_user_memory_free(struct nv_drm_gem_object *nv_gem)
{
struct nv_drm_gem_user_memory *nv_user_memory = to_nv_user_memory(nv_gem);
nv_drm_unlock_user_pages(nv_user_memory->pages_count,
nv_user_memory->pages);
nv_drm_free(nv_user_memory);
}
static struct sg_table *__nv_drm_gem_user_memory_prime_get_sg_table(
struct nv_drm_gem_object *nv_gem)
{
struct nv_drm_gem_user_memory *nv_user_memory = to_nv_user_memory(nv_gem);
struct drm_gem_object *gem = &nv_gem->base;
return nv_drm_prime_pages_to_sg(gem->dev,
nv_user_memory->pages,
nv_user_memory->pages_count);
}
static void *__nv_drm_gem_user_memory_prime_vmap(
struct nv_drm_gem_object *nv_gem)
{
struct nv_drm_gem_user_memory *nv_user_memory = to_nv_user_memory(nv_gem);
return nv_drm_vmap(nv_user_memory->pages,
nv_user_memory->pages_count);
}
static void __nv_drm_gem_user_memory_prime_vunmap(
struct nv_drm_gem_object *gem,
void *address)
{
nv_drm_vunmap(address);
}
static int __nv_drm_gem_user_memory_mmap(struct nv_drm_gem_object *nv_gem,
struct vm_area_struct *vma)
{
int ret = drm_gem_mmap_obj(&nv_gem->base,
drm_vma_node_size(&nv_gem->base.vma_node) << PAGE_SHIFT, vma);
if (ret < 0) {
return ret;
}
/*
* Enforce that user-memory GEM mappings are MAP_SHARED, to prevent COW
* with MAP_PRIVATE and VM_MIXEDMAP
*/
if (!(vma->vm_flags & VM_SHARED)) {
return -EINVAL;
}
vma->vm_flags &= ~VM_PFNMAP;
vma->vm_flags &= ~VM_IO;
vma->vm_flags |= VM_MIXEDMAP;
return 0;
}
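/*
 * Example: the MAP_SHARED requirement enforced above is visible to
 * userspace. Illustrative sketch, assuming `offset` came from the driver's
 * map-offset path (e.g. DRM_IOCTL_NVIDIA_GEM_MAP_OFFSET):
 *
 *   #include <stdint.h>
 *   #include <sys/mman.h>
 *
 *   int drm_fd;        // open nvidia-drm fd
 *   uint64_t offset;   // from DRM_IOCTL_NVIDIA_GEM_MAP_OFFSET
 *   size_t size;       // size of the GEM object
 *
 *   // Succeeds: shared mapping of the GEM object's pages.
 *   void *ok = mmap(NULL, size, PROT_READ | PROT_WRITE,
 *                   MAP_SHARED, drm_fd, offset);
 *
 *   // Fails with EINVAL: private (COW) mappings are rejected above.
 *   void *bad = mmap(NULL, size, PROT_READ | PROT_WRITE,
 *                    MAP_PRIVATE, drm_fd, offset);
 */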
static vm_fault_t __nv_drm_gem_user_memory_handle_vma_fault(
struct nv_drm_gem_object *nv_gem,
struct vm_area_struct *vma,
struct vm_fault *vmf)
{
struct nv_drm_gem_user_memory *nv_user_memory = to_nv_user_memory(nv_gem);
unsigned long address = nv_page_fault_va(vmf);
struct drm_gem_object *gem = vma->vm_private_data;
unsigned long page_offset;
vm_fault_t ret;
page_offset = vmf->pgoff - drm_vma_node_start(&gem->vma_node);
    BUG_ON(page_offset >= nv_user_memory->pages_count);
ret = vm_insert_page(vma, address, nv_user_memory->pages[page_offset]);
switch (ret) {
case 0:
case -EBUSY:
/*
* EBUSY indicates that another thread already handled
* the faulted range.
*/
ret = VM_FAULT_NOPAGE;
break;
case -ENOMEM:
ret = VM_FAULT_OOM;
break;
default:
WARN_ONCE(1, "Unhandled error in %s: %d\n", __FUNCTION__, ret);
ret = VM_FAULT_SIGBUS;
break;
}
return ret;
}
static int __nv_drm_gem_user_create_mmap_offset(
struct nv_drm_device *nv_dev,
struct nv_drm_gem_object *nv_gem,
uint64_t *offset)
{
(void)nv_dev;
return nv_drm_gem_create_mmap_offset(nv_gem, offset);
}
const struct nv_drm_gem_object_funcs __nv_gem_user_memory_ops = {
.free = __nv_drm_gem_user_memory_free,
.prime_get_sg_table = __nv_drm_gem_user_memory_prime_get_sg_table,
.prime_vmap = __nv_drm_gem_user_memory_prime_vmap,
.prime_vunmap = __nv_drm_gem_user_memory_prime_vunmap,
.mmap = __nv_drm_gem_user_memory_mmap,
.handle_vma_fault = __nv_drm_gem_user_memory_handle_vma_fault,
.create_mmap_offset = __nv_drm_gem_user_create_mmap_offset,
};
int nv_drm_gem_import_userspace_memory_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep)
{
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct drm_nvidia_gem_import_userspace_memory_params *params = data;
struct nv_drm_gem_user_memory *nv_user_memory;
struct page **pages = NULL;
unsigned long pages_count = 0;
int ret = 0;
if ((params->size % PAGE_SIZE) != 0) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Userspace memory 0x%llx size should be in a multiple of page "
"size to create a gem object",
params->address);
return -EINVAL;
}
pages_count = params->size / PAGE_SIZE;
ret = nv_drm_lock_user_pages(params->address, pages_count, &pages);
if (ret != 0) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to lock user pages for address 0x%llx: %d",
params->address, ret);
return ret;
}
if ((nv_user_memory =
nv_drm_calloc(1, sizeof(*nv_user_memory))) == NULL) {
ret = -ENOMEM;
goto failed;
}
nv_user_memory->pages = pages;
nv_user_memory->pages_count = pages_count;
nv_drm_gem_object_init(nv_dev,
&nv_user_memory->base,
&__nv_gem_user_memory_ops,
params->size,
NULL /* pMemory */);
return nv_drm_gem_handle_create_drop_reference(filep,
&nv_user_memory->base,
&params->handle);
failed:
nv_drm_unlock_user_pages(pages_count, pages);
return ret;
}
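/*
 * Example: a minimal userspace sketch of the import path, assuming the
 * drm_nvidia_gem_import_userspace_memory_params layout from
 * nvidia-drm-ioctl.h (size, address, handle). The buffer must span a whole
 * number of pages, per the check above. Illustrative only:
 *
 *   #include <stdint.h>
 *   #include <stdlib.h>
 *   #include <string.h>
 *   #include <unistd.h>
 *   #include <sys/ioctl.h>
 *   #include "nvidia-drm-ioctl.h"
 *
 *   static int import_user_memory(int drm_fd, size_t npages, uint32_t *handle)
 *   {
 *       long page_size = sysconf(_SC_PAGESIZE);
 *       size_t size = npages * page_size;
 *       void *buf = aligned_alloc(page_size, size);
 *       struct drm_nvidia_gem_import_userspace_memory_params p;
 *
 *       if (buf == NULL) {
 *           return -1;
 *       }
 *
 *       memset(&p, 0, sizeof(p));
 *       p.size = size;                     // multiple of the page size
 *       p.address = (uintptr_t)buf;        // pages are pinned by the driver
 *
 *       if (ioctl(drm_fd, DRM_IOCTL_NVIDIA_GEM_IMPORT_USERSPACE_MEMORY,
 *                 &p) != 0) {
 *           return -1;
 *       }
 *
 *       *handle = p.handle;
 *       return 0;
 *   }
 */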
#endif

View File

@@ -0,0 +1,72 @@
/*
* Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NVIDIA_DRM_GEM_USER_MEMORY_H__
#define __NVIDIA_DRM_GEM_USER_MEMORY_H__
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_AVAILABLE)
#include "nvidia-drm-gem.h"
struct nv_drm_gem_user_memory {
struct nv_drm_gem_object base;
struct page **pages;
unsigned long pages_count;
};
extern const struct nv_drm_gem_object_funcs __nv_gem_user_memory_ops;
static inline struct nv_drm_gem_user_memory *to_nv_user_memory(
struct nv_drm_gem_object *nv_gem)
{
if (nv_gem != NULL) {
return container_of(nv_gem, struct nv_drm_gem_user_memory, base);
}
return NULL;
}
int nv_drm_gem_import_userspace_memory_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep);
static inline
struct nv_drm_gem_user_memory *nv_drm_gem_object_user_memory_lookup(
struct drm_device *dev,
struct drm_file *filp,
u32 handle)
{
struct nv_drm_gem_object *nv_gem =
nv_drm_gem_object_lookup(dev, filp, handle);
if (nv_gem != NULL && nv_gem->ops != &__nv_gem_user_memory_ops) {
nv_drm_gem_object_unreference_unlocked(nv_gem);
return NULL;
}
return to_nv_user_memory(nv_gem);
}
#endif
#endif /* __NVIDIA_DRM_GEM_USER_MEMORY_H__ */

View File

@@ -0,0 +1,399 @@
/*
* Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_AVAILABLE)
#include "nvidia-drm-priv.h"
#include "nvidia-drm-ioctl.h"
#include "nvidia-drm-prime-fence.h"
#include "nvidia-drm-gem.h"
#include "nvidia-drm-gem-nvkms-memory.h"
#include "nvidia-drm-gem-user-memory.h"
#include "nvidia-dma-resv-helper.h"
#include "nvidia-drm-helper.h"
#include "nvidia-drm-gem-dma-buf.h"
#include "nvidia-drm-gem-nvkms-memory.h"
#if defined(NV_DRM_DRM_DRV_H_PRESENT)
#include <drm/drm_drv.h>
#endif
#if defined(NV_DRM_DRM_PRIME_H_PRESENT)
#include <drm/drm_prime.h>
#endif
#if defined(NV_DRM_DRM_FILE_H_PRESENT)
#include <drm/drm_file.h>
#endif
#include "linux/dma-buf.h"
#include "nv-mm.h"
void nv_drm_gem_free(struct drm_gem_object *gem)
{
struct nv_drm_gem_object *nv_gem = to_nv_gem_object(gem);
/* Cleanup core gem object */
drm_gem_object_release(&nv_gem->base);
#if defined(NV_DRM_FENCE_AVAILABLE) && !defined(NV_DRM_GEM_OBJECT_HAS_RESV)
nv_dma_resv_fini(&nv_gem->resv);
#endif
nv_gem->ops->free(nv_gem);
}
#if !defined(NV_DRM_DRIVER_HAS_GEM_PRIME_CALLBACKS) && \
defined(NV_DRM_GEM_OBJECT_VMAP_HAS_MAP_ARG)
/*
* The 'dma_buf_map' structure is renamed to 'iosys_map' by the commit
* 7938f4218168 ("dma-buf-map: Rename to iosys-map").
*/
#if defined(NV_LINUX_IOSYS_MAP_H_PRESENT)
typedef struct iosys_map nv_sysio_map_t;
#else
typedef struct dma_buf_map nv_sysio_map_t;
#endif
static int nv_drm_gem_vmap(struct drm_gem_object *gem,
nv_sysio_map_t *map)
{
map->vaddr = nv_drm_gem_prime_vmap(gem);
if (map->vaddr == NULL) {
return -ENOMEM;
}
map->is_iomem = true;
return 0;
}
static void nv_drm_gem_vunmap(struct drm_gem_object *gem,
nv_sysio_map_t *map)
{
nv_drm_gem_prime_vunmap(gem, map->vaddr);
map->vaddr = NULL;
}
#endif
#if !defined(NV_DRM_DRIVER_HAS_GEM_FREE_OBJECT) || \
!defined(NV_DRM_DRIVER_HAS_GEM_PRIME_CALLBACKS)
static struct drm_gem_object_funcs nv_drm_gem_funcs = {
.free = nv_drm_gem_free,
.get_sg_table = nv_drm_gem_prime_get_sg_table,
#if !defined(NV_DRM_DRIVER_HAS_GEM_PRIME_CALLBACKS)
.export = drm_gem_prime_export,
#if defined(NV_DRM_GEM_OBJECT_VMAP_HAS_MAP_ARG)
.vmap = nv_drm_gem_vmap,
.vunmap = nv_drm_gem_vunmap,
#else
.vmap = nv_drm_gem_prime_vmap,
.vunmap = nv_drm_gem_prime_vunmap,
#endif
.vm_ops = &nv_drm_gem_vma_ops,
#endif
};
#endif
void nv_drm_gem_object_init(struct nv_drm_device *nv_dev,
struct nv_drm_gem_object *nv_gem,
const struct nv_drm_gem_object_funcs * const ops,
size_t size,
struct NvKmsKapiMemory *pMemory)
{
struct drm_device *dev = nv_dev->dev;
nv_gem->nv_dev = nv_dev;
nv_gem->ops = ops;
nv_gem->pMemory = pMemory;
/* Initialize the gem object */
#if defined(NV_DRM_FENCE_AVAILABLE)
nv_dma_resv_init(&nv_gem->resv);
#if defined(NV_DRM_GEM_OBJECT_HAS_RESV)
nv_gem->base.resv = &nv_gem->resv;
#endif
#endif
#if !defined(NV_DRM_DRIVER_HAS_GEM_FREE_OBJECT)
nv_gem->base.funcs = &nv_drm_gem_funcs;
#endif
drm_gem_private_object_init(dev, &nv_gem->base, size);
}
struct drm_gem_object *nv_drm_gem_prime_import(struct drm_device *dev,
struct dma_buf *dma_buf)
{
#if defined(NV_DMA_BUF_OWNER_PRESENT)
struct drm_gem_object *gem_dst;
struct nv_drm_gem_object *nv_gem_src;
if (dma_buf->owner == dev->driver->fops->owner) {
nv_gem_src = to_nv_gem_object(dma_buf->priv);
if (nv_gem_src->base.dev != dev &&
nv_gem_src->ops->prime_dup != NULL) {
/*
* If we're importing from another NV device, try to handle the
* import internally rather than attaching through the dma-buf
* mechanisms. Importing from the same device is even easier,
* and drm_gem_prime_import() handles that just fine.
*/
gem_dst = nv_gem_src->ops->prime_dup(dev, nv_gem_src);
if (gem_dst)
return gem_dst;
}
}
#endif /* NV_DMA_BUF_OWNER_PRESENT */
return drm_gem_prime_import(dev, dma_buf);
}
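/*
 * Example: the prime_dup fast path above is exercised by the ordinary PRIME
 * flow. Illustrative sketch, assuming two open nvidia-drm fds and libdrm's
 * drmPrimeHandleToFD()/drmPrimeFDToHandle():
 *
 *   #include <stdint.h>
 *   #include <xf86drm.h>
 *
 *   int fd_gpu0, fd_gpu1;        // two open nvidia-drm device fds
 *   uint32_t handle0, handle1;   // handle0 is an existing GEM handle on GPU0
 *   int prime_fd;
 *
 *   // Export from GPU0, import into GPU1. When both fds belong to this
 *   // driver, the import is duplicated through NVKMS (ops->prime_dup)
 *   // instead of attaching through the generic dma-buf machinery.
 *   drmPrimeHandleToFD(fd_gpu0, handle0, DRM_CLOEXEC, &prime_fd);
 *   drmPrimeFDToHandle(fd_gpu1, prime_fd, &handle1);
 */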
struct sg_table *nv_drm_gem_prime_get_sg_table(struct drm_gem_object *gem)
{
struct nv_drm_gem_object *nv_gem = to_nv_gem_object(gem);
if (nv_gem->ops->prime_get_sg_table != NULL) {
return nv_gem->ops->prime_get_sg_table(nv_gem);
}
return ERR_PTR(-ENOTSUPP);
}
void *nv_drm_gem_prime_vmap(struct drm_gem_object *gem)
{
struct nv_drm_gem_object *nv_gem = to_nv_gem_object(gem);
if (nv_gem->ops->prime_vmap != NULL) {
return nv_gem->ops->prime_vmap(nv_gem);
}
return ERR_PTR(-ENOTSUPP);
}
void nv_drm_gem_prime_vunmap(struct drm_gem_object *gem, void *address)
{
struct nv_drm_gem_object *nv_gem = to_nv_gem_object(gem);
if (nv_gem->ops->prime_vunmap != NULL) {
nv_gem->ops->prime_vunmap(nv_gem, address);
}
}
#if defined(NV_DRM_DRIVER_HAS_GEM_PRIME_RES_OBJ)
nv_dma_resv_t* nv_drm_gem_prime_res_obj(struct drm_gem_object *obj)
{
struct nv_drm_gem_object *nv_gem = to_nv_gem_object(obj);
return &nv_gem->resv;
}
#endif
int nv_drm_gem_map_offset_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep)
{
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct drm_nvidia_gem_map_offset_params *params = data;
struct nv_drm_gem_object *nv_gem;
int ret;
if ((nv_gem = nv_drm_gem_object_lookup(dev,
filep,
params->handle)) == NULL) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to lookup gem object for map: 0x%08x",
params->handle);
return -EINVAL;
}
if (nv_gem->ops->create_mmap_offset) {
ret = nv_gem->ops->create_mmap_offset(nv_dev, nv_gem, &params->offset);
} else {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Gem object type does not support mapping: 0x%08x",
params->handle);
ret = -EINVAL;
}
nv_drm_gem_object_unreference_unlocked(nv_gem);
return ret;
}
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
int nv_drm_mmap(struct file *file, struct vm_area_struct *vma)
{
struct drm_file *priv = file->private_data;
struct drm_device *dev = priv->minor->dev;
struct drm_gem_object *obj = NULL;
struct drm_vma_offset_node *node;
int ret = 0;
struct nv_drm_gem_object *nv_gem;
drm_vma_offset_lock_lookup(dev->vma_offset_manager);
node = nv_drm_vma_offset_exact_lookup_locked(dev->vma_offset_manager,
vma->vm_pgoff, vma_pages(vma));
if (likely(node)) {
obj = container_of(node, struct drm_gem_object, vma_node);
/*
* When the object is being freed, after it hits 0-refcnt it proceeds
* to tear down the object. In the process it will attempt to remove
* the VMA offset and so acquire this mgr->vm_lock. Therefore if we
* find an object with a 0-refcnt that matches our range, we know it is
* in the process of being destroyed and will be freed as soon as we
* release the lock - so we have to check for the 0-refcnted object and
* treat it as invalid.
*/
if (!kref_get_unless_zero(&obj->refcount))
obj = NULL;
}
drm_vma_offset_unlock_lookup(dev->vma_offset_manager);
if (!obj)
return -EINVAL;
nv_gem = to_nv_gem_object(obj);
if (nv_gem->ops->mmap == NULL) {
ret = -EINVAL;
goto done;
}
if (!nv_drm_vma_node_is_allowed(node, file)) {
ret = -EACCES;
goto done;
}
#if defined(NV_DRM_VMA_OFFSET_NODE_HAS_READONLY)
if (node->readonly) {
if (vma->vm_flags & VM_WRITE) {
ret = -EINVAL;
goto done;
}
vma->vm_flags &= ~VM_MAYWRITE;
}
#endif
ret = nv_gem->ops->mmap(nv_gem, vma);
done:
nv_drm_gem_object_unreference_unlocked(nv_gem);
return ret;
}
#endif
int nv_drm_gem_identify_object_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep)
{
struct drm_nvidia_gem_identify_object_params *p = data;
struct nv_drm_gem_dma_buf *nv_dma_buf;
struct nv_drm_gem_nvkms_memory *nv_nvkms_memory;
struct nv_drm_gem_user_memory *nv_user_memory;
struct nv_drm_gem_object *nv_gem = NULL;
if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
return -EINVAL;
}
nv_dma_buf = nv_drm_gem_object_dma_buf_lookup(dev, filep, p->handle);
if (nv_dma_buf) {
p->object_type = NV_GEM_OBJECT_DMABUF;
nv_gem = &nv_dma_buf->base;
goto done;
}
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
nv_nvkms_memory = nv_drm_gem_object_nvkms_memory_lookup(dev, filep, p->handle);
if (nv_nvkms_memory) {
p->object_type = NV_GEM_OBJECT_NVKMS;
nv_gem = &nv_nvkms_memory->base;
goto done;
}
#endif
nv_user_memory = nv_drm_gem_object_user_memory_lookup(dev, filep, p->handle);
if (nv_user_memory) {
p->object_type = NV_GEM_OBJECT_USERMEMORY;
nv_gem = &nv_user_memory->base;
goto done;
}
p->object_type = NV_GEM_OBJECT_UNKNOWN;
done:
if (nv_gem) {
nv_drm_gem_object_unreference_unlocked(nv_gem);
}
return 0;
}
/* XXX Move these vma operations to os layer */
static vm_fault_t __nv_drm_vma_fault(struct vm_area_struct *vma,
struct vm_fault *vmf)
{
struct drm_gem_object *gem = vma->vm_private_data;
struct nv_drm_gem_object *nv_gem = to_nv_gem_object(gem);
if (!nv_gem) {
return VM_FAULT_SIGBUS;
}
return nv_gem->ops->handle_vma_fault(nv_gem, vma, vmf);
}
/*
* Note that nv_drm_vma_fault() can be called for different or same
* ranges of the same drm_gem_object simultaneously.
*/
#if defined(NV_VM_OPS_FAULT_REMOVED_VMA_ARG)
static vm_fault_t nv_drm_vma_fault(struct vm_fault *vmf)
{
return __nv_drm_vma_fault(vmf->vma, vmf);
}
#else
static vm_fault_t nv_drm_vma_fault(struct vm_area_struct *vma,
struct vm_fault *vmf)
{
return __nv_drm_vma_fault(vma, vmf);
}
#endif
const struct vm_operations_struct nv_drm_gem_vma_ops = {
.open = drm_gem_vm_open,
.fault = nv_drm_vma_fault,
.close = drm_gem_vm_close,
};
#endif /* NV_DRM_AVAILABLE */

View File

@@ -0,0 +1,211 @@
/*
* Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NVIDIA_DRM_GEM_H__
#define __NVIDIA_DRM_GEM_H__
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_AVAILABLE)
#include "nvidia-drm-priv.h"
#if defined(NV_DRM_DRMP_H_PRESENT)
#include <drm/drmP.h>
#endif
#if defined(NV_DRM_DRM_GEM_H_PRESENT)
#include <drm/drm_gem.h>
#endif
#include "nvkms-kapi.h"
#include "nv-mm.h"
#if defined(NV_DRM_FENCE_AVAILABLE)
#include "nvidia-dma-fence-helper.h"
#include "nvidia-dma-resv-helper.h"
#endif
struct nv_drm_gem_object;
struct nv_drm_gem_object_funcs {
void (*free)(struct nv_drm_gem_object *nv_gem);
struct sg_table *(*prime_get_sg_table)(struct nv_drm_gem_object *nv_gem);
void *(*prime_vmap)(struct nv_drm_gem_object *nv_gem);
void (*prime_vunmap)(struct nv_drm_gem_object *nv_gem, void *address);
struct drm_gem_object *(*prime_dup)(struct drm_device *dev,
const struct nv_drm_gem_object *nv_gem_src);
int (*mmap)(struct nv_drm_gem_object *nv_gem, struct vm_area_struct *vma);
vm_fault_t (*handle_vma_fault)(struct nv_drm_gem_object *nv_gem,
struct vm_area_struct *vma,
struct vm_fault *vmf);
int (*create_mmap_offset)(struct nv_drm_device *nv_dev,
struct nv_drm_gem_object *nv_gem,
uint64_t *offset);
};
struct nv_drm_gem_object {
struct drm_gem_object base;
struct nv_drm_device *nv_dev;
const struct nv_drm_gem_object_funcs *ops;
struct NvKmsKapiMemory *pMemory;
#if defined(NV_DRM_FENCE_AVAILABLE)
nv_dma_resv_t resv;
#endif
};
static inline struct nv_drm_gem_object *to_nv_gem_object(
struct drm_gem_object *gem)
{
if (gem != NULL) {
return container_of(gem, struct nv_drm_gem_object, base);
}
return NULL;
}
/*
* drm_gem_object_{get/put}() added by commit
* e6b62714e87c8811d5564b6a0738dcde63a51774 (2017-02-28) and
* drm_gem_object_{reference/unreference}() removed by commit
* 3e70fd160cf0b1945225eaa08dd2cb8544f21cb8 (2018-11-15).
*/
static inline void
nv_drm_gem_object_unreference_unlocked(struct nv_drm_gem_object *nv_gem)
{
#if defined(NV_DRM_GEM_OBJECT_GET_PRESENT)
#if defined(NV_DRM_GEM_OBJECT_PUT_UNLOCK_PRESENT)
drm_gem_object_put_unlocked(&nv_gem->base);
#else
drm_gem_object_put(&nv_gem->base);
#endif
#else
drm_gem_object_unreference_unlocked(&nv_gem->base);
#endif
}
static inline void
nv_drm_gem_object_unreference(struct nv_drm_gem_object *nv_gem)
{
#if defined(NV_DRM_GEM_OBJECT_GET_PRESENT)
drm_gem_object_put(&nv_gem->base);
#else
drm_gem_object_unreference(&nv_gem->base);
#endif
}
static inline int nv_drm_gem_handle_create_drop_reference(
struct drm_file *file_priv,
struct nv_drm_gem_object *nv_gem,
uint32_t *handle)
{
int ret = drm_gem_handle_create(file_priv, &nv_gem->base, handle);
/* drop reference from allocate - handle holds it now */
nv_drm_gem_object_unreference_unlocked(nv_gem);
return ret;
}
static inline int nv_drm_gem_create_mmap_offset(
struct nv_drm_gem_object *nv_gem,
uint64_t *offset)
{
int ret;
if ((ret = drm_gem_create_mmap_offset(&nv_gem->base)) < 0) {
NV_DRM_DEV_LOG_ERR(
nv_gem->nv_dev,
"drm_gem_create_mmap_offset failed with error code %d",
ret);
goto done;
}
*offset = drm_vma_node_offset_addr(&nv_gem->base.vma_node);
done:
return ret;
}
void nv_drm_gem_free(struct drm_gem_object *gem);
static inline struct nv_drm_gem_object *nv_drm_gem_object_lookup(
struct drm_device *dev,
struct drm_file *filp,
u32 handle)
{
#if (NV_DRM_GEM_OBJECT_LOOKUP_ARGUMENT_COUNT == 3)
return to_nv_gem_object(drm_gem_object_lookup(dev, filp, handle));
#elif (NV_DRM_GEM_OBJECT_LOOKUP_ARGUMENT_COUNT == 2)
return to_nv_gem_object(drm_gem_object_lookup(filp, handle));
#else
#error "Unknown argument count of drm_gem_object_lookup()"
#endif
}
static inline int nv_drm_gem_handle_create(struct drm_file *filp,
struct nv_drm_gem_object *nv_gem,
uint32_t *handle)
{
return drm_gem_handle_create(filp, &nv_gem->base, handle);
}
void nv_drm_gem_object_init(struct nv_drm_device *nv_dev,
struct nv_drm_gem_object *nv_gem,
const struct nv_drm_gem_object_funcs * const ops,
size_t size,
struct NvKmsKapiMemory *pMemory);
struct drm_gem_object *nv_drm_gem_prime_import(struct drm_device *dev,
struct dma_buf *dma_buf);
struct sg_table *nv_drm_gem_prime_get_sg_table(struct drm_gem_object *gem);
void *nv_drm_gem_prime_vmap(struct drm_gem_object *gem);
void nv_drm_gem_prime_vunmap(struct drm_gem_object *gem, void *address);
#if defined(NV_DRM_DRIVER_HAS_GEM_PRIME_RES_OBJ)
nv_dma_resv_t* nv_drm_gem_prime_res_obj(struct drm_gem_object *obj);
#endif
extern const struct vm_operations_struct nv_drm_gem_vma_ops;
int nv_drm_gem_map_offset_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep);
int nv_drm_mmap(struct file *file, struct vm_area_struct *vma);
int nv_drm_gem_identify_object_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep);
#endif /* NV_DRM_AVAILABLE */
#endif /* __NVIDIA_DRM_GEM_H__ */

View File

@@ -0,0 +1,191 @@
/*
* Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
/*
* This file contains snapshots of DRM helper functions from the
* Linux kernel which are used by nvidia-drm.ko if the target kernel
* predates the helper function. Having these functions consistently
* present simplifies nvidia-drm.ko source.
*/
#include "nvidia-drm-helper.h"
#include "nvmisc.h"
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
#if defined(NV_DRM_DRMP_H_PRESENT)
#include <drm/drmP.h>
#endif
#if defined(NV_DRM_DRM_ATOMIC_UAPI_H_PRESENT)
#include <drm/drm_atomic_uapi.h>
#endif
static void __nv_drm_framebuffer_put(struct drm_framebuffer *fb)
{
#if defined(NV_DRM_FRAMEBUFFER_GET_PRESENT)
drm_framebuffer_put(fb);
#else
drm_framebuffer_unreference(fb);
#endif
}
/*
* drm_atomic_helper_disable_all() has been added by commit
* 1494276000db789c6d2acd85747be4707051c801, which is Signed-off-by:
* Thierry Reding <treding@nvidia.com>
* Daniel Vetter <daniel.vetter@ffwll.ch>
*
* drm_atomic_helper_disable_all() is copied from
* linux/drivers/gpu/drm/drm_atomic_helper.c and modified to use
* nv_drm_for_each_crtc instead of drm_for_each_crtc to loop over all crtcs,
* use nv_drm_for_each_*_in_state instead of for_each_connector_in_state to loop
* over all modeset object states, and use drm_atomic_state_free() if
* drm_atomic_state_put() is not available.
*
* drm_atomic_helper_disable_all() is copied from
* linux/drivers/gpu/drm/drm_atomic_helper.c @
* 49d70aeaeca8f62b72b7712ecd1e29619a445866, which has the following
* copyright and license information:
*
* Copyright (C) 2014 Red Hat
* Copyright (C) 2014 Intel Corp.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors:
* Rob Clark <robdclark@gmail.com>
* Daniel Vetter <daniel.vetter@ffwll.ch>
*/
int nv_drm_atomic_helper_disable_all(struct drm_device *dev,
struct drm_modeset_acquire_ctx *ctx)
{
struct drm_atomic_state *state;
struct drm_connector_state *conn_state;
struct drm_connector *conn;
struct drm_plane_state *plane_state;
struct drm_plane *plane;
struct drm_crtc_state *crtc_state;
struct drm_crtc *crtc;
unsigned plane_mask = 0;
int ret, i;
state = drm_atomic_state_alloc(dev);
if (!state)
return -ENOMEM;
state->acquire_ctx = ctx;
nv_drm_for_each_crtc(crtc, dev) {
crtc_state = drm_atomic_get_crtc_state(state, crtc);
if (IS_ERR(crtc_state)) {
ret = PTR_ERR(crtc_state);
goto free;
}
crtc_state->active = false;
ret = drm_atomic_set_mode_prop_for_crtc(crtc_state, NULL);
if (ret < 0)
goto free;
ret = drm_atomic_add_affected_planes(state, crtc);
if (ret < 0)
goto free;
ret = drm_atomic_add_affected_connectors(state, crtc);
if (ret < 0)
goto free;
}
nv_drm_for_each_connector_in_state(state, conn, conn_state, i) {
ret = drm_atomic_set_crtc_for_connector(conn_state, NULL);
if (ret < 0)
goto free;
}
nv_drm_for_each_plane_in_state(state, plane, plane_state, i) {
ret = drm_atomic_set_crtc_for_plane(plane_state, NULL);
if (ret < 0)
goto free;
drm_atomic_set_fb_for_plane(plane_state, NULL);
plane_mask |= NVBIT(drm_plane_index(plane));
plane->old_fb = plane->fb;
}
ret = drm_atomic_commit(state);
free:
if (plane_mask) {
drm_for_each_plane_mask(plane, dev, plane_mask) {
if (ret == 0) {
plane->fb = NULL;
plane->crtc = NULL;
WARN_ON(plane->state->fb);
WARN_ON(plane->state->crtc);
if (plane->old_fb)
__nv_drm_framebuffer_put(plane->old_fb);
}
plane->old_fb = NULL;
}
}
#if defined(NV_DRM_ATOMIC_STATE_REF_COUNTING_PRESENT)
drm_atomic_state_put(state);
#else
if (ret != 0) {
drm_atomic_state_free(state);
} else {
/*
* In case of success, drm_atomic_commit() takes care to cleanup and
* free @state.
*
* Comment placed above drm_atomic_commit() says: The caller must not
* free or in any other way access @state. If the function fails then
* the caller must clean up @state itself.
*/
}
#endif
return ret;
}
#endif /* NV_DRM_ATOMIC_MODESET_AVAILABLE */

View File

@@ -0,0 +1,584 @@
/*
* Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NVIDIA_DRM_HELPER_H__
#define __NVIDIA_DRM_HELPER_H__
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_AVAILABLE)
#if defined(NV_DRM_DRMP_H_PRESENT)
#include <drm/drmP.h>
#endif
#if defined(NV_DRM_DRM_DRV_H_PRESENT)
#include <drm/drm_drv.h>
#endif
/*
* drm_dev_put() is added by commit 9a96f55034e41b4e002b767e9218d55f03bdff7d
* (2017-09-26) and drm_dev_unref() is removed by
* ba1d345401476a5f7fbad622607c5a1f95e59b31 (2018-11-15).
*
* drm_dev_unref() has been added and drm_dev_free() removed by commit -
*
* 2014-01-29: 099d1c290e2ebc3b798961a6c177c3aef5f0b789
*/
static inline void nv_drm_dev_free(struct drm_device *dev)
{
#if defined(NV_DRM_DEV_PUT_PRESENT)
drm_dev_put(dev);
#elif defined(NV_DRM_DEV_UNREF_PRESENT)
drm_dev_unref(dev);
#else
drm_dev_free(dev);
#endif
}
#if defined(NV_DRM_DRM_PRIME_H_PRESENT)
#include <drm/drm_prime.h>
#endif
static inline struct sg_table*
nv_drm_prime_pages_to_sg(struct drm_device *dev,
struct page **pages, unsigned int nr_pages)
{
#if defined(NV_DRM_PRIME_PAGES_TO_SG_HAS_DRM_DEVICE_ARG)
return drm_prime_pages_to_sg(dev, pages, nr_pages);
#else
return drm_prime_pages_to_sg(pages, nr_pages);
#endif
}
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
/*
* drm_for_each_connector(), drm_for_each_crtc(), drm_for_each_fb(),
* drm_for_each_encoder and drm_for_each_plane() were added by kernel
* commit 6295d607ad34ee4e43aab3f20714c2ef7a6adea1 which was
* Signed-off-by:
* Daniel Vetter <daniel.vetter@intel.com>
* drm_for_each_connector(), drm_for_each_crtc(), drm_for_each_fb(),
* drm_for_each_encoder and drm_for_each_plane() are copied from
* include/drm/drm_crtc @
* 6295d607ad34ee4e43aab3f20714c2ef7a6adea1
* which has the following copyright and license information:
*
* Copyright © 2006 Keith Packard
* Copyright © 2007-2008 Dave Airlie
* Copyright © 2007-2008 Intel Corporation
* Jesse Barnes <jesse.barnes@intel.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <drm/drm_crtc.h>
#if defined(drm_for_each_plane)
#define nv_drm_for_each_plane(plane, dev) \
drm_for_each_plane(plane, dev)
#else
#define nv_drm_for_each_plane(plane, dev) \
list_for_each_entry(plane, &(dev)->mode_config.plane_list, head)
#endif
#if defined(drm_for_each_crtc)
#define nv_drm_for_each_crtc(crtc, dev) \
drm_for_each_crtc(crtc, dev)
#else
#define nv_drm_for_each_crtc(crtc, dev) \
list_for_each_entry(crtc, &(dev)->mode_config.crtc_list, head)
#endif
#if defined(NV_DRM_CONNECTOR_LIST_ITER_PRESENT)
#define nv_drm_for_each_connector(connector, conn_iter, dev) \
drm_for_each_connector_iter(connector, conn_iter)
#elif defined(drm_for_each_connector)
#define nv_drm_for_each_connector(connector, conn_iter, dev) \
drm_for_each_connector(connector, dev)
#else
#define nv_drm_for_each_connector(connector, conn_iter, dev) \
WARN_ON(!mutex_is_locked(&dev->mode_config.mutex)); \
list_for_each_entry(connector, &(dev)->mode_config.connector_list, head)
#endif
#if defined(drm_for_each_encoder)
#define nv_drm_for_each_encoder(encoder, dev) \
drm_for_each_encoder(encoder, dev)
#else
#define nv_drm_for_each_encoder(encoder, dev) \
list_for_each_entry(encoder, &(dev)->mode_config.encoder_list, head)
#endif
#if defined(drm_for_each_fb)
#define nv_drm_for_each_fb(fb, dev) \
drm_for_each_fb(fb, dev)
#else
#define nv_drm_for_each_fb(fb, dev) \
list_for_each_entry(fb, &(dev)->mode_config.fb_list, head)
#endif
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
int nv_drm_atomic_helper_disable_all(struct drm_device *dev,
struct drm_modeset_acquire_ctx *ctx);
/*
* for_each_connector_in_state(), for_each_crtc_in_state() and
* for_each_plane_in_state() were added by kernel commit
* df63b9994eaf942afcdb946d27a28661d7dfbf2a which was Signed-off-by:
* Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
* Daniel Vetter <daniel.vetter@ffwll.ch>
*
* for_each_connector_in_state(), for_each_crtc_in_state() and
* for_each_plane_in_state() were copied from
* include/drm/drm_atomic.h @
* 21a01abbe32a3cbeb903378a24e504bfd9fe0648
* which has the following copyright and license information:
*
* Copyright (C) 2014 Red Hat
* Copyright (C) 2014 Intel Corp.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors:
* Rob Clark <robdclark@gmail.com>
* Daniel Vetter <daniel.vetter@ffwll.ch>
*/
/**
* nv_drm_for_each_connector_in_state - iterate over all connectors in an
* atomic update
* @__state: &struct drm_atomic_state pointer
* @connector: &struct drm_connector iteration cursor
* @connector_state: &struct drm_connector_state iteration cursor
* @__i: int iteration cursor, for macro-internal use
*
* This iterates over all connectors in an atomic update. Note that before the
 * software state is committed (by calling drm_atomic_helper_swap_state()), this
* points to the new state, while afterwards it points to the old state. Due to
* this tricky confusion this macro is deprecated.
*/
#if !defined(for_each_connector_in_state)
#define nv_drm_for_each_connector_in_state(__state, \
connector, connector_state, __i) \
for ((__i) = 0; \
(__i) < (__state)->num_connector && \
((connector) = (__state)->connectors[__i].ptr, \
(connector_state) = (__state)->connectors[__i].state, 1); \
(__i)++) \
for_each_if (connector)
#else
#define nv_drm_for_each_connector_in_state(__state, \
connector, connector_state, __i) \
for_each_connector_in_state(__state, connector, connector_state, __i)
#endif
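/*
 * Example: illustrative use of the iterator, mirroring how it is used in
 * nv_drm_atomic_helper_disable_all() in nvidia-drm-helper.c; `state` is a
 * populated struct drm_atomic_state:
 *
 *   struct drm_connector *conn;
 *   struct drm_connector_state *conn_state;
 *   int i;
 *
 *   nv_drm_for_each_connector_in_state(state, conn, conn_state, i) {
 *       // Inspect or modify each connector's pre-swap (new) state here.
 *   }
 */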
/**
* nv_drm_for_each_crtc_in_state - iterate over all CRTCs in an atomic update
* @__state: &struct drm_atomic_state pointer
* @crtc: &struct drm_crtc iteration cursor
* @crtc_state: &struct drm_crtc_state iteration cursor
* @__i: int iteration cursor, for macro-internal use
*
* This iterates over all CRTCs in an atomic update. Note that before the
 * software state is committed (by calling drm_atomic_helper_swap_state()), this
* points to the new state, while afterwards it points to the old state. Due to
* this tricky confusion this macro is deprecated.
*/
#if !defined(for_each_crtc_in_state)
#define nv_drm_for_each_crtc_in_state(__state, crtc, crtc_state, __i) \
for ((__i) = 0; \
(__i) < (__state)->dev->mode_config.num_crtc && \
((crtc) = (__state)->crtcs[__i].ptr, \
(crtc_state) = (__state)->crtcs[__i].state, 1); \
(__i)++) \
for_each_if (crtc_state)
#else
#define nv_drm_for_each_crtc_in_state(__state, crtc, crtc_state, __i) \
for_each_crtc_in_state(__state, crtc, crtc_state, __i)
#endif
/**
* nv_drm_for_each_plane_in_state - iterate over all planes in an atomic update
* @__state: &struct drm_atomic_state pointer
* @plane: &struct drm_plane iteration cursor
* @plane_state: &struct drm_plane_state iteration cursor
* @__i: int iteration cursor, for macro-internal use
*
* This iterates over all planes in an atomic update. Note that before the
 * software state is committed (by calling drm_atomic_helper_swap_state()), this
* points to the new state, while afterwards it points to the old state. Due to
* this tricky confusion this macro is deprecated.
*/
#if !defined(for_each_plane_in_state)
#define nv_drm_for_each_plane_in_state(__state, plane, plane_state, __i) \
for ((__i) = 0; \
(__i) < (__state)->dev->mode_config.num_total_plane && \
((plane) = (__state)->planes[__i].ptr, \
(plane_state) = (__state)->planes[__i].state, 1); \
(__i)++) \
for_each_if (plane_state)
#else
#define nv_drm_for_each_plane_in_state(__state, plane, plane_state, __i) \
for_each_plane_in_state(__state, plane, plane_state, __i)
#endif
static inline struct drm_crtc *nv_drm_crtc_find(struct drm_device *dev,
uint32_t id)
{
#if defined(NV_DRM_MODE_OBJECT_FIND_HAS_FILE_PRIV_ARG)
return drm_crtc_find(dev, NULL /* file_priv */, id);
#else
return drm_crtc_find(dev, id);
#endif
}
static inline struct drm_encoder *nv_drm_encoder_find(struct drm_device *dev,
uint32_t id)
{
#if defined(NV_DRM_MODE_OBJECT_FIND_HAS_FILE_PRIV_ARG)
return drm_encoder_find(dev, NULL /* file_priv */, id);
#else
return drm_encoder_find(dev, id);
#endif
}
/*
* drm_connector_for_each_possible_encoder() is added by commit
* 83aefbb887b59df0b3520965c3701e01deacfc52 which was Signed-off-by:
* Ville Syrjälä <ville.syrjala@linux.intel.com>
*
* drm_connector_for_each_possible_encoder() is copied from
* include/drm/drm_connector.h and modified to use nv_drm_encoder_find()
* instead of drm_encoder_find().
*
* drm_connector_for_each_possible_encoder() is copied from
* include/drm/drm_connector.h @
* 83aefbb887b59df0b3520965c3701e01deacfc52
* which has the following copyright and license information:
*
* Copyright (c) 2016 Intel Corporation
*
* Permission to use, copy, modify, distribute, and sell this software and its
* documentation for any purpose is hereby granted without fee, provided that
* the above copyright notice appear in all copies and that both that copyright
* notice and this permission notice appear in supporting documentation, and
* that the name of the copyright holders not be used in advertising or
* publicity pertaining to distribution of the software without specific,
* written prior permission. The copyright holders make no representations
* about the suitability of this software for any purpose. It is provided "as
* is" without express or implied warranty.
*
* THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
* INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO
* EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR
* CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE,
* DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
* TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
* OF THIS SOFTWARE.
*/
#if defined(NV_DRM_DRM_CONNECTOR_H_PRESENT)
#include <drm/drm_connector.h>
#endif
/**
* nv_drm_connector_for_each_possible_encoder - iterate connector's possible
* encoders
* @connector: &struct drm_connector pointer
* @encoder: &struct drm_encoder pointer used as cursor
* @__i: int iteration cursor, for macro-internal use
*/
#if !defined(drm_connector_for_each_possible_encoder)
#if !defined(for_each_if)
#define for_each_if(condition) if (!(condition)) {} else
#endif
#define __nv_drm_connector_for_each_possible_encoder(connector, encoder, __i) \
for ((__i) = 0; (__i) < ARRAY_SIZE((connector)->encoder_ids) && \
(connector)->encoder_ids[(__i)] != 0; (__i)++) \
for_each_if((encoder) = \
nv_drm_encoder_find((connector)->dev, \
(connector)->encoder_ids[(__i)]))
#define nv_drm_connector_for_each_possible_encoder(connector, encoder) \
{ \
unsigned int __i; \
__nv_drm_connector_for_each_possible_encoder(connector, encoder, __i)
#define nv_drm_connector_for_each_possible_encoder_end \
}
#else
#if NV_DRM_CONNECTOR_FOR_EACH_POSSIBLE_ENCODER_ARGUMENT_COUNT == 3
#define nv_drm_connector_for_each_possible_encoder(connector, encoder) \
{ \
unsigned int __i; \
drm_connector_for_each_possible_encoder(connector, encoder, __i)
#define nv_drm_connector_for_each_possible_encoder_end \
}
#else
#define nv_drm_connector_for_each_possible_encoder(connector, encoder) \
drm_connector_for_each_possible_encoder(connector, encoder)
#define nv_drm_connector_for_each_possible_encoder_end
#endif
#endif
static inline int
nv_drm_connector_attach_encoder(struct drm_connector *connector,
struct drm_encoder *encoder)
{
#if defined(NV_DRM_CONNECTOR_FUNCS_HAVE_MODE_IN_NAME)
return drm_mode_connector_attach_encoder(connector, encoder);
#else
return drm_connector_attach_encoder(connector, encoder);
#endif
}
static inline int
nv_drm_connector_update_edid_property(struct drm_connector *connector,
const struct edid *edid)
{
#if defined(NV_DRM_CONNECTOR_FUNCS_HAVE_MODE_IN_NAME)
return drm_mode_connector_update_edid_property(connector, edid);
#else
return drm_connector_update_edid_property(connector, edid);
#endif
}
#if defined(NV_DRM_CONNECTOR_LIST_ITER_PRESENT)
#include <drm/drm_connector.h>
static inline
void nv_drm_connector_list_iter_begin(struct drm_device *dev,
struct drm_connector_list_iter *iter)
{
#if defined(NV_DRM_CONNECTOR_LIST_ITER_BEGIN_PRESENT)
drm_connector_list_iter_begin(dev, iter);
#else
drm_connector_list_iter_get(dev, iter);
#endif
}
static inline
void nv_drm_connector_list_iter_end(struct drm_connector_list_iter *iter)
{
#if defined(NV_DRM_CONNECTOR_LIST_ITER_BEGIN_PRESENT)
drm_connector_list_iter_end(iter);
#else
drm_connector_list_iter_put(iter);
#endif
}
#endif
/*
* The drm_format_num_planes() function was added by commit d0d110e09629 drm:
* Add drm_format_num_planes() utility function in v3.3 (2011-12-20). Prototype
* was moved from drm_crtc.h to drm_fourcc.h by commit ae4df11a0f53 (drm: Move
* format-related helpers to drm_fourcc.c) in v4.8 (2016-06-09).
* drm_format_num_planes() has been removed by commit 05c452c115bf (drm: Remove
* users of drm_format_num_planes) in v5.3 (2019-05-16).
*
* drm_format_info() is available only from v4.10 (2016-10-18), added by commit
* 84770cc24f3a (drm: Centralize format information).
*/
#include <drm/drm_crtc.h>
#include <drm/drm_fourcc.h>
static inline int nv_drm_format_num_planes(uint32_t format)
{
#if defined(NV_DRM_FORMAT_NUM_PLANES_PRESENT)
return drm_format_num_planes(format);
#else
const struct drm_format_info *info = drm_format_info(format);
return info != NULL ? info->num_planes : 1;
#endif
}
#if defined(NV_DRM_FORMAT_MODIFIERS_PRESENT)
/*
* DRM_FORMAT_MOD_LINEAR was also defined after the original modifier support
* was added to the kernel, as a more explicit alias of DRM_FORMAT_MOD_NONE
*/
#if !defined(DRM_FORMAT_MOD_VENDOR_NONE)
#define DRM_FORMAT_MOD_VENDOR_NONE 0
#endif
#if !defined(DRM_FORMAT_MOD_LINEAR)
#define DRM_FORMAT_MOD_LINEAR fourcc_mod_code(NONE, 0)
#endif
/*
* DRM_FORMAT_MOD_INVALID was defined after the original modifier support was
* added to the kernel, for use as a sentinel value.
*/
#if !defined(DRM_FORMAT_RESERVED)
#define DRM_FORMAT_RESERVED ((1ULL << 56) - 1)
#endif
#if !defined(DRM_FORMAT_MOD_INVALID)
#define DRM_FORMAT_MOD_INVALID fourcc_mod_code(NONE, DRM_FORMAT_RESERVED)
#endif
/*
* DRM_FORMAT_MOD_VENDOR_NVIDIA was previously called
 * DRM_FORMAT_MOD_VENDOR_NV.
*/
#if !defined(DRM_FORMAT_MOD_VENDOR_NVIDIA)
#define DRM_FORMAT_MOD_VENDOR_NVIDIA DRM_FORMAT_MOD_VENDOR_NV
#endif
/*
* DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D is a relatively new addition to the
* upstream kernel headers compared to the other format modifiers.
*/
#if !defined(DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D)
#define DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D(c, s, g, k, h) \
fourcc_mod_code(NVIDIA, (0x10 | \
((h) & 0xf) | \
(((k) & 0xff) << 12) | \
(((g) & 0x3) << 20) | \
(((s) & 0x1) << 22) | \
(((c) & 0x7) << 23)))
#endif
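/*
 * For reference, the bit layout packed by the macro above (field meanings
 * follow the upstream drm_fourcc.h documentation for this modifier):
 *
 *   h (bits 0-3)   log2(block height in GOBs)
 *   bit 4          always set, marks the 2D block-linear encoding
 *   k (bits 12-19) page kind
 *   g (bits 20-21) page kind generation
 *   s (bit 22)     sector layout
 *   c (bits 23-25) compression type
 *
 * Worked example: DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D(0, 1, 2, 0x06, 4)
 * describes an uncompressed surface with sector layout 1, page kind
 * generation 2, page kind 0x06, and 16-GOB-high blocks, i.e.
 * 0x10 | 4 | (0x06 << 12) | (2 << 20) | (1 << 22) = 0x606014, giving the
 * modifier 0x0300000000606014 once the NVIDIA vendor code is placed in the
 * top byte. The g/s/k inputs typically come from the page_kind_generation,
 * sector_layout, and generic_page_kind values reported by the driver's
 * GET_DEV_INFO ioctl (struct drm_nvidia_get_dev_info_params).
 */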
#endif /* defined(NV_DRM_FORMAT_MODIFIERS_PRESENT) */
/*
* drm_vma_offset_exact_lookup_locked() were added
* by kernel commit 2225cfe46bcc which was Signed-off-by:
* Daniel Vetter <daniel.vetter@intel.com>
*
* drm_vma_offset_exact_lookup_locked() were copied from
* include/drm/drm_vma_manager.h @ 2225cfe46bcc
* which has the following copyright and license information:
*
* Copyright (c) 2013 David Herrmann <dh.herrmann@gmail.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <drm/drm_vma_manager.h>
/**
* nv_drm_vma_offset_exact_lookup_locked() - Look up node by exact address
* @mgr: Manager object
* @start: Start address (page-based, not byte-based)
* @pages: Size of object (page-based)
*
* Same as drm_vma_offset_lookup_locked() but does not allow any offset into the node.
* It only returns the exact object with the given start address.
*
* RETURNS:
* Node at exact start address @start.
*/
static inline struct drm_vma_offset_node *
nv_drm_vma_offset_exact_lookup_locked(struct drm_vma_offset_manager *mgr,
unsigned long start,
unsigned long pages)
{
#if defined(NV_DRM_VMA_OFFSET_EXACT_LOOKUP_LOCKED_PRESENT)
return drm_vma_offset_exact_lookup_locked(mgr, start, pages);
#else
struct drm_vma_offset_node *node;
node = drm_vma_offset_lookup_locked(mgr, start, pages);
return (node && node->vm_node.start == start) ? node : NULL;
#endif
}
static inline bool
nv_drm_vma_node_is_allowed(struct drm_vma_offset_node *node,
struct file *filp)
{
#if defined(NV_DRM_VMA_NODE_IS_ALLOWED_HAS_TAG_ARG)
return drm_vma_node_is_allowed(node, filp->private_data);
#else
return drm_vma_node_is_allowed(node, filp);
#endif
}
#endif /* defined(NV_DRM_ATOMIC_MODESET_AVAILABLE) */
#endif /* defined(NV_DRM_AVAILABLE) */
#endif /* __NVIDIA_DRM_HELPER_H__ */

View File

@@ -0,0 +1,232 @@
/*
* Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef _UAPI_NVIDIA_DRM_IOCTL_H_
#define _UAPI_NVIDIA_DRM_IOCTL_H_
#include <drm/drm.h>
/*
* We should do our best to keep these values constant. Any change to these will
* be backwards incompatible with client applications that might be using them
*/
#define DRM_NVIDIA_GET_CRTC_CRC32 0x00
#define DRM_NVIDIA_GEM_IMPORT_NVKMS_MEMORY 0x01
#define DRM_NVIDIA_GEM_IMPORT_USERSPACE_MEMORY 0x02
#define DRM_NVIDIA_GET_DEV_INFO 0x03
#define DRM_NVIDIA_FENCE_SUPPORTED 0x04
#define DRM_NVIDIA_FENCE_CONTEXT_CREATE 0x05
#define DRM_NVIDIA_GEM_FENCE_ATTACH 0x06
#define DRM_NVIDIA_GET_CLIENT_CAPABILITY 0x08
#define DRM_NVIDIA_GEM_EXPORT_NVKMS_MEMORY 0x09
#define DRM_NVIDIA_GEM_MAP_OFFSET 0x0a
#define DRM_NVIDIA_GEM_ALLOC_NVKMS_MEMORY 0x0b
#define DRM_NVIDIA_GET_CRTC_CRC32_V2 0x0c
#define DRM_NVIDIA_GEM_EXPORT_DMABUF_MEMORY 0x0d
#define DRM_NVIDIA_GEM_IDENTIFY_OBJECT 0x0e
#define DRM_IOCTL_NVIDIA_GEM_IMPORT_NVKMS_MEMORY \
DRM_IOWR((DRM_COMMAND_BASE + DRM_NVIDIA_GEM_IMPORT_NVKMS_MEMORY), \
struct drm_nvidia_gem_import_nvkms_memory_params)
#define DRM_IOCTL_NVIDIA_GEM_IMPORT_USERSPACE_MEMORY \
DRM_IOWR((DRM_COMMAND_BASE + DRM_NVIDIA_GEM_IMPORT_USERSPACE_MEMORY), \
struct drm_nvidia_gem_import_userspace_memory_params)
#define DRM_IOCTL_NVIDIA_GET_DEV_INFO \
DRM_IOWR((DRM_COMMAND_BASE + DRM_NVIDIA_GET_DEV_INFO), \
struct drm_nvidia_get_dev_info_params)
/*
* XXX Solaris compiler has issues with DRM_IO. None of this is supported on
* Solaris anyway, so just skip it.
*
* 'warning: suggest parentheses around arithmetic in operand of |'
*/
#if defined(NV_LINUX)
#define DRM_IOCTL_NVIDIA_FENCE_SUPPORTED \
DRM_IO(DRM_COMMAND_BASE + DRM_NVIDIA_FENCE_SUPPORTED)
#else
#define DRM_IOCTL_NVIDIA_FENCE_SUPPORTED 0
#endif
#define DRM_IOCTL_NVIDIA_FENCE_CONTEXT_CREATE \
DRM_IOWR((DRM_COMMAND_BASE + DRM_NVIDIA_FENCE_CONTEXT_CREATE), \
struct drm_nvidia_fence_context_create_params)
#define DRM_IOCTL_NVIDIA_GEM_FENCE_ATTACH \
DRM_IOW((DRM_COMMAND_BASE + DRM_NVIDIA_GEM_FENCE_ATTACH), \
struct drm_nvidia_gem_fence_attach_params)
#define DRM_IOCTL_NVIDIA_GET_CLIENT_CAPABILITY \
DRM_IOWR((DRM_COMMAND_BASE + DRM_NVIDIA_GET_CLIENT_CAPABILITY), \
struct drm_nvidia_get_client_capability_params)
#define DRM_IOCTL_NVIDIA_GET_CRTC_CRC32 \
DRM_IOWR((DRM_COMMAND_BASE + DRM_NVIDIA_GET_CRTC_CRC32), \
struct drm_nvidia_get_crtc_crc32_params)
#define DRM_IOCTL_NVIDIA_GET_CRTC_CRC32_V2 \
DRM_IOWR((DRM_COMMAND_BASE + DRM_NVIDIA_GET_CRTC_CRC32_V2), \
struct drm_nvidia_get_crtc_crc32_v2_params)
#define DRM_IOCTL_NVIDIA_GEM_EXPORT_NVKMS_MEMORY \
DRM_IOWR((DRM_COMMAND_BASE + DRM_NVIDIA_GEM_EXPORT_NVKMS_MEMORY), \
struct drm_nvidia_gem_export_nvkms_memory_params)
#define DRM_IOCTL_NVIDIA_GEM_MAP_OFFSET \
DRM_IOWR((DRM_COMMAND_BASE + DRM_NVIDIA_GEM_MAP_OFFSET), \
struct drm_nvidia_gem_map_offset_params)
#define DRM_IOCTL_NVIDIA_GEM_ALLOC_NVKMS_MEMORY \
DRM_IOWR((DRM_COMMAND_BASE + DRM_NVIDIA_GEM_ALLOC_NVKMS_MEMORY), \
struct drm_nvidia_gem_alloc_nvkms_memory_params)
#define DRM_IOCTL_NVIDIA_GEM_EXPORT_DMABUF_MEMORY \
DRM_IOWR((DRM_COMMAND_BASE + DRM_NVIDIA_GEM_EXPORT_DMABUF_MEMORY), \
struct drm_nvidia_gem_export_dmabuf_memory_params)
#define DRM_IOCTL_NVIDIA_GEM_IDENTIFY_OBJECT \
DRM_IOWR((DRM_COMMAND_BASE + DRM_NVIDIA_GEM_IDENTIFY_OBJECT), \
struct drm_nvidia_gem_identify_object_params)
struct drm_nvidia_gem_import_nvkms_memory_params {
uint64_t mem_size; /* IN */
uint64_t nvkms_params_ptr; /* IN */
uint64_t nvkms_params_size; /* IN */
uint32_t handle; /* OUT */
uint32_t __pad;
};
struct drm_nvidia_gem_import_userspace_memory_params {
uint64_t size; /* IN Size of memory in bytes */
uint64_t address; /* IN Virtual address of userspace memory */
uint32_t handle; /* OUT Handle to gem object */
};
struct drm_nvidia_get_dev_info_params {
uint32_t gpu_id; /* OUT */
uint32_t primary_index; /* OUT; the "card%d" value */
/* See DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D definitions of these */
uint32_t generic_page_kind; /* OUT */
uint32_t page_kind_generation; /* OUT */
uint32_t sector_layout; /* OUT */
};
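/*
 * Illustrative sketch (not part of this header): querying device info from
 * userspace. The "/dev/dri/card0" path is an assumption made for the example;
 * a real client would discover the correct DRM node first.
 *
 *   int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
 *   struct drm_nvidia_get_dev_info_params info = { 0 };
 *
 *   if (fd >= 0 && ioctl(fd, DRM_IOCTL_NVIDIA_GET_DEV_INFO, &info) == 0) {
 *       printf("gpu_id=0x%08x card%u\n", info.gpu_id, info.primary_index);
 *   }
 */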
struct drm_nvidia_fence_context_create_params {
uint32_t handle; /* OUT GEM handle to fence context */
uint32_t index; /* IN Index of semaphore to use for fencing */
uint64_t size; /* IN Size of semaphore surface in bytes */
/* Params for importing userspace semaphore surface */
uint64_t import_mem_nvkms_params_ptr; /* IN */
uint64_t import_mem_nvkms_params_size; /* IN */
/* Params for creating software signaling event */
uint64_t event_nvkms_params_ptr; /* IN */
uint64_t event_nvkms_params_size; /* IN */
};
struct drm_nvidia_gem_fence_attach_params {
uint32_t handle; /* IN GEM handle to attach fence to */
uint32_t fence_context_handle; /* IN GEM handle to fence context on which fence is run on */
uint32_t sem_thresh; /* IN Semaphore value to reach before signal */
};
struct drm_nvidia_get_client_capability_params {
uint64_t capability; /* IN Client capability enum */
uint64_t value; /* OUT Client capability value */
};
/* Struct that stores a CRC32 value and whether it is supported by the hardware */
struct drm_nvidia_crtc_crc32 {
uint32_t value; /* Read value, undefined if supported is false */
uint8_t supported; /* Supported boolean, true if readable by hardware */
};
struct drm_nvidia_crtc_crc32_v2_out {
struct drm_nvidia_crtc_crc32 compositorCrc32; /* OUT compositor hardware CRC32 value */
struct drm_nvidia_crtc_crc32 rasterGeneratorCrc32; /* OUT raster generator CRC32 value */
struct drm_nvidia_crtc_crc32 outputCrc32; /* OUT SF/SOR CRC32 value */
};
struct drm_nvidia_get_crtc_crc32_v2_params {
uint32_t crtc_id; /* IN CRTC identifier */
struct drm_nvidia_crtc_crc32_v2_out crc32; /* OUT Crc32 output structure */
};
struct drm_nvidia_get_crtc_crc32_params {
uint32_t crtc_id; /* IN CRTC identifier */
uint32_t crc32; /* OUT CRC32 value */
};
struct drm_nvidia_gem_export_nvkms_memory_params {
uint32_t handle; /* IN */
uint32_t __pad;
uint64_t nvkms_params_ptr; /* IN */
uint64_t nvkms_params_size; /* IN */
};
struct drm_nvidia_gem_map_offset_params {
uint32_t handle; /* IN Handle to gem object */
uint32_t __pad;
uint64_t offset; /* OUT Fake offset */
};
struct drm_nvidia_gem_alloc_nvkms_memory_params {
uint32_t handle; /* OUT */
uint8_t block_linear; /* IN */
uint8_t compressible; /* IN/OUT */
uint16_t __pad;
uint64_t memory_size; /* IN */
};
struct drm_nvidia_gem_export_dmabuf_memory_params {
    uint32_t handle;             /* IN GEM handle */
uint32_t __pad;
uint64_t nvkms_params_ptr; /* IN */
uint64_t nvkms_params_size; /* IN */
};
typedef enum {
NV_GEM_OBJECT_NVKMS,
NV_GEM_OBJECT_DMABUF,
NV_GEM_OBJECT_USERMEMORY,
NV_GEM_OBJECT_UNKNOWN = 0x7fffffff /* Force size of 32-bits. */
} drm_nvidia_gem_object_type;
struct drm_nvidia_gem_identify_object_params {
uint32_t handle; /* IN GEM handle*/
drm_nvidia_gem_object_type object_type; /* OUT GEM object type */
};
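/*
 * Illustrative sketch (not part of this header): classifying an existing GEM
 * handle. 'fd' and 'gem_handle' are assumptions made for the example.
 *
 *   struct drm_nvidia_gem_identify_object_params p = { .handle = gem_handle };
 *
 *   if (ioctl(fd, DRM_IOCTL_NVIDIA_GEM_IDENTIFY_OBJECT, &p) == 0 &&
 *       p.object_type == NV_GEM_OBJECT_USERMEMORY) {
 *       printf("handle %u is imported user memory\n", p.handle);
 *   }
 */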
#endif /* _UAPI_NVIDIA_DRM_IOCTL_H_ */

View File

@@ -0,0 +1,189 @@
/*
* Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/err.h>
#include "nvidia-drm-os-interface.h"
#include "nvidia-drm.h"
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_AVAILABLE)
#if defined(NV_DRM_DRMP_H_PRESENT)
#include <drm/drmP.h>
#endif
#include <linux/vmalloc.h>
#include "nv-mm.h"
MODULE_PARM_DESC(
modeset,
"Enable atomic kernel modesetting (1 = enable, 0 = disable (default))");
bool nv_drm_modeset_module_param = false;
module_param_named(modeset, nv_drm_modeset_module_param, bool, 0400);
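/* Typically enabled with "modprobe nvidia-drm modeset=1", or with
 * nvidia-drm.modeset=1 on the kernel command line. */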
void *nv_drm_calloc(size_t nmemb, size_t size)
{
    /* kcalloc() checks the nmemb * size multiplication for overflow. */
    return kcalloc(nmemb, size, GFP_KERNEL);
}
void nv_drm_free(void *ptr)
{
if (IS_ERR(ptr)) {
return;
}
kfree(ptr);
}
char *nv_drm_asprintf(const char *fmt, ...)
{
va_list ap;
char *p;
va_start(ap, fmt);
p = kvasprintf(GFP_KERNEL, fmt, ap);
va_end(ap);
return p;
}
#if defined(NVCPU_X86) || defined(NVCPU_X86_64)
#define WRITE_COMBINE_FLUSH() asm volatile("sfence":::"memory")
#elif defined(NVCPU_FAMILY_ARM)
#if defined(NVCPU_ARM)
#define WRITE_COMBINE_FLUSH() { dsb(); outer_sync(); }
#elif defined(NVCPU_AARCH64)
#define WRITE_COMBINE_FLUSH() mb()
#endif
#elif defined(NVCPU_PPC64LE)
#define WRITE_COMBINE_FLUSH() asm volatile("sync":::"memory")
#endif
void nv_drm_write_combine_flush(void)
{
WRITE_COMBINE_FLUSH();
}
int nv_drm_lock_user_pages(unsigned long address,
unsigned long pages_count, struct page ***pages)
{
struct mm_struct *mm = current->mm;
struct page **user_pages;
const int write = 1;
const int force = 0;
int pages_pinned;
user_pages = nv_drm_calloc(pages_count, sizeof(*user_pages));
if (user_pages == NULL) {
return -ENOMEM;
}
nv_mmap_read_lock(mm);
pages_pinned = NV_GET_USER_PAGES(address, pages_count, write, force,
user_pages, NULL);
nv_mmap_read_unlock(mm);
if (pages_pinned < 0 || (unsigned)pages_pinned < pages_count) {
goto failed;
}
*pages = user_pages;
return 0;
failed:
if (pages_pinned > 0) {
int i;
for (i = 0; i < pages_pinned; i++) {
put_page(user_pages[i]);
}
}
nv_drm_free(user_pages);
return (pages_pinned < 0) ? pages_pinned : -EINVAL;
}
void nv_drm_unlock_user_pages(unsigned long pages_count, struct page **pages)
{
unsigned long i;
for (i = 0; i < pages_count; i++) {
set_page_dirty_lock(pages[i]);
put_page(pages[i]);
}
nv_drm_free(pages);
}
void *nv_drm_vmap(struct page **pages, unsigned long pages_count)
{
return vmap(pages, pages_count, VM_USERMAP, PAGE_KERNEL);
}
void nv_drm_vunmap(void *address)
{
vunmap(address);
}
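/*
 * Illustrative sketch (not part of this file): the intended lifecycle of the
 * helpers above. 'addr' and 'count' are assumptions made for the example.
 *
 *   struct page **pages;
 *   void *ptr;
 *
 *   if (nv_drm_lock_user_pages(addr, count, &pages) == 0) {
 *       ptr = nv_drm_vmap(pages, count);
 *       if (ptr != NULL) {
 *           memset(ptr, 0, count << PAGE_SHIFT);
 *           nv_drm_vunmap(ptr);
 *       }
 *       nv_drm_unlock_user_pages(count, pages);
 *   }
 */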
#endif /* NV_DRM_AVAILABLE */
/*************************************************************************
* Linux loading support code.
*************************************************************************/
static int __init nv_linux_drm_init(void)
{
return nv_drm_init();
}
static void __exit nv_linux_drm_exit(void)
{
nv_drm_exit();
}
module_init(nv_linux_drm_init);
module_exit(nv_linux_drm_exit);
#if defined(MODULE_LICENSE)
MODULE_LICENSE("Dual MIT/GPL");
#endif
#if defined(MODULE_INFO)
MODULE_INFO(supported, "external");
#endif
#if defined(MODULE_VERSION)
MODULE_VERSION(NV_VERSION_STRING);
#endif

View File

@@ -0,0 +1,577 @@
/*
* Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include "nvidia-drm-conftest.h" /* NV_DRM_ATOMIC_MODESET_AVAILABLE */
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
#include "nvidia-drm-priv.h"
#include "nvidia-drm-modeset.h"
#include "nvidia-drm-crtc.h"
#include "nvidia-drm-os-interface.h"
#include "nvidia-drm-helper.h"
#if defined(NV_DRM_DRMP_H_PRESENT)
#include <drm/drmP.h>
#endif
#if defined(NV_DRM_DRM_VBLANK_H_PRESENT)
#include <drm/drm_vblank.h>
#endif
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc.h>
struct nv_drm_atomic_state {
struct NvKmsKapiRequestedModeSetConfig config;
struct drm_atomic_state base;
};
static inline struct nv_drm_atomic_state *to_nv_atomic_state(
struct drm_atomic_state *state)
{
return container_of(state, struct nv_drm_atomic_state, base);
}
struct drm_atomic_state *nv_drm_atomic_state_alloc(struct drm_device *dev)
{
struct nv_drm_atomic_state *nv_state =
nv_drm_calloc(1, sizeof(*nv_state));
if (nv_state == NULL || drm_atomic_state_init(dev, &nv_state->base) < 0) {
nv_drm_free(nv_state);
return NULL;
}
return &nv_state->base;
}
void nv_drm_atomic_state_clear(struct drm_atomic_state *state)
{
drm_atomic_state_default_clear(state);
}
void nv_drm_atomic_state_free(struct drm_atomic_state *state)
{
struct nv_drm_atomic_state *nv_state =
to_nv_atomic_state(state);
drm_atomic_state_default_release(state);
nv_drm_free(nv_state);
}
/**
 * __will_generate_flip_event - Check whether a flip event will be generated
 * by the hardware when it flips from the old crtc/plane state to the current
 * one. This function is called after drm_atomic_helper_swap_state(), so the
 * new state has already been swapped into the current state.
*/
static bool __will_generate_flip_event(struct drm_crtc *crtc,
struct drm_crtc_state *old_crtc_state)
{
struct drm_crtc_state *new_crtc_state = crtc->state;
struct nv_drm_crtc_state *nv_new_crtc_state =
to_nv_crtc_state(new_crtc_state);
struct drm_plane_state *old_plane_state = NULL;
struct drm_plane *plane = NULL;
struct drm_plane *primary_plane = crtc->primary;
bool primary_event = false;
bool overlay_event = false;
int i;
if (!old_crtc_state->active && !new_crtc_state->active) {
/*
         * The crtc is not active in either the old or the new state,
         * therefore all planes are disabled and the hardware cannot
         * generate flip events.
*/
return false;
}
/* Find out whether primary & overlay flip done events will be generated. */
nv_drm_for_each_plane_in_state(old_crtc_state->state,
plane, old_plane_state, i) {
if (old_plane_state->crtc != crtc) {
continue;
}
if (plane->type == DRM_PLANE_TYPE_CURSOR) {
continue;
}
/*
         * The hardware generates a flip event only for those planes
         * that were previously active.
*/
if (old_crtc_state->active && old_plane_state->fb != NULL) {
nv_new_crtc_state->nv_flip->pending_events++;
}
}
return nv_new_crtc_state->nv_flip->pending_events != 0;
}
static int __nv_drm_put_back_post_fence_fd(
struct nv_drm_plane_state *plane_state,
const struct NvKmsKapiLayerReplyConfig *layer_reply_config)
{
int fd = layer_reply_config->postSyncptFd;
if ((fd >= 0) && (plane_state->fd_user_ptr != NULL)) {
if (put_user(fd, plane_state->fd_user_ptr)) {
return -EFAULT;
}
        /*! Set back to NULL and let set_property specify it again. */
plane_state->fd_user_ptr = NULL;
}
return 0;
}
static int __nv_drm_get_syncpt_data(
struct nv_drm_device *nv_dev,
struct drm_crtc *crtc,
struct drm_crtc_state *old_crtc_state,
struct NvKmsKapiRequestedModeSetConfig *requested_config,
struct NvKmsKapiModeSetReplyConfig *reply_config)
{
struct nv_drm_crtc *nv_crtc = to_nv_crtc(crtc);
struct NvKmsKapiHeadReplyConfig *head_reply_config;
struct nv_drm_plane_state *plane_state;
struct drm_crtc_state *new_crtc_state = crtc->state;
struct drm_plane_state *old_plane_state = NULL;
struct drm_plane_state *new_plane_state = NULL;
struct drm_plane *plane = NULL;
int i, ret;
if (!old_crtc_state->active && !new_crtc_state->active) {
/*
         * The crtc is not active in either the old or the new state,
         * therefore all planes are disabled; exit early.
*/
return 0;
}
head_reply_config = &reply_config->headReplyConfig[nv_crtc->head];
nv_drm_for_each_plane_in_state(old_crtc_state->state, plane, old_plane_state, i) {
struct nv_drm_plane *nv_plane = to_nv_plane(plane);
if (plane->type == DRM_PLANE_TYPE_CURSOR || old_plane_state->crtc != crtc) {
continue;
}
new_plane_state = plane->state;
if (new_plane_state->crtc != crtc) {
continue;
}
plane_state = to_nv_drm_plane_state(new_plane_state);
ret = __nv_drm_put_back_post_fence_fd(
plane_state,
&head_reply_config->layerReplyConfig[nv_plane->layer_idx]);
if (ret != 0) {
return ret;
}
}
return 0;
}
/**
 * nv_drm_atomic_apply_modeset_config - validate/commit a modeset config
 * @dev: DRM device
 * @state: atomic state tracking this atomic update
 * @commit: commit/check modeset config associated with atomic update
 *
 * @state tracks the atomic update and the modeset objects affected by it,
 * but the state of the modeset objects it contains depends on the current
 * stage of the update.
 * At the commit stage, the proposed state has already been swapped into the
 * current state pointers, and @state contains the old state for all affected
 * modeset objects.
 * At the check/validation stage, @state contains the proposed state for
 * all affected objects.
 *
 * Sequence of an atomic update -
 * 1. Check/validate the proposed atomic state,
 * 2. Do any other steps that might fail,
 * 3. Put the proposed state into the current state pointers,
 * 4. Actually commit the hardware state,
 * 5. Clean up the old state.
 *
 * This function is called at stage (1) and, after
 * drm_atomic_helper_swap_state(), at stage (4).
 */
static int
nv_drm_atomic_apply_modeset_config(struct drm_device *dev,
struct drm_atomic_state *state,
bool commit)
{
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct NvKmsKapiRequestedModeSetConfig *requested_config =
&(to_nv_atomic_state(state)->config);
struct NvKmsKapiModeSetReplyConfig reply_config = { };
struct drm_crtc *crtc;
struct drm_crtc_state *crtc_state;
int i;
int ret;
memset(requested_config, 0, sizeof(*requested_config));
/* Loop over affected crtcs and construct NvKmsKapiRequestedModeSetConfig */
nv_drm_for_each_crtc_in_state(state, crtc, crtc_state, i) {
/*
* When committing a state, the new state is already stored in
* crtc->state. When checking a proposed state, the proposed state is
* stored in crtc_state.
*/
struct drm_crtc_state *new_crtc_state =
commit ? crtc->state : crtc_state;
struct nv_drm_crtc *nv_crtc = to_nv_crtc(crtc);
requested_config->headRequestedConfig[nv_crtc->head] =
to_nv_crtc_state(new_crtc_state)->req_config;
requested_config->headsMask |= 1 << nv_crtc->head;
if (commit) {
struct drm_crtc_state *old_crtc_state = crtc_state;
struct nv_drm_crtc_state *nv_new_crtc_state =
to_nv_crtc_state(new_crtc_state);
nv_new_crtc_state->nv_flip->event = new_crtc_state->event;
nv_new_crtc_state->nv_flip->pending_events = 0;
new_crtc_state->event = NULL;
/*
* If flip event will be generated by hardware
* then defer flip object processing to flip event from hardware.
*/
if (__will_generate_flip_event(crtc, old_crtc_state)) {
nv_drm_crtc_enqueue_flip(nv_crtc,
nv_new_crtc_state->nv_flip);
nv_new_crtc_state->nv_flip = NULL;
}
}
}
if (commit && nvKms->systemInfo.bAllowWriteCombining) {
/*
* XXX This call is required only if dumb buffer is going
* to be presented.
*/
nv_drm_write_combine_flush();
}
if (!nvKms->applyModeSetConfig(nv_dev->pDevice,
requested_config,
&reply_config,
commit)) {
return -EINVAL;
}
if (commit && nv_dev->supportsSyncpts) {
nv_drm_for_each_crtc_in_state(state, crtc, crtc_state, i) {
/*! loop over affected crtcs and get NvKmsKapiModeSetReplyConfig */
ret = __nv_drm_get_syncpt_data(
nv_dev, crtc, crtc_state, requested_config, &reply_config);
if (ret != 0) {
return ret;
}
}
}
return 0;
}
int nv_drm_atomic_check(struct drm_device *dev,
struct drm_atomic_state *state)
{
int ret = 0;
if ((ret = drm_atomic_helper_check(dev, state)) != 0) {
goto done;
}
ret = nv_drm_atomic_apply_modeset_config(dev,
state, false /* commit */);
done:
return ret;
}
/**
 * __nv_drm_handle_flip_event - handle a flip-occurred event
 * @nv_crtc: crtc on which the flip occurred
*
* This handler dequeues the first nv_drm_flip from the crtc's flip_list,
* generates an event if requested at flip time, and frees the nv_drm_flip.
*/
static void __nv_drm_handle_flip_event(struct nv_drm_crtc *nv_crtc)
{
struct drm_device *dev = nv_crtc->base.dev;
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct nv_drm_flip *nv_flip;
/*
     * Acquire event_lock before dequeuing the nv_flip object; otherwise
     * immediate flip event delivery from nv_drm_atomic_commit() races
     * ahead and disrupts the event delivery order.
*/
spin_lock(&dev->event_lock);
nv_flip = nv_drm_crtc_dequeue_flip(nv_crtc);
if (likely(nv_flip != NULL)) {
struct nv_drm_flip *nv_deferred_flip, *nv_next_deferred_flip;
if (nv_flip->event != NULL) {
drm_crtc_send_vblank_event(&nv_crtc->base, nv_flip->event);
}
/*
* Process flips that were deferred until processing of this nv_flip
* object.
*/
list_for_each_entry_safe(nv_deferred_flip,
nv_next_deferred_flip,
&nv_flip->deferred_flip_list, list_entry) {
if (nv_deferred_flip->event != NULL) {
drm_crtc_send_vblank_event(&nv_crtc->base,
nv_deferred_flip->event);
}
list_del(&nv_deferred_flip->list_entry);
nv_drm_free(nv_deferred_flip);
}
}
spin_unlock(&dev->event_lock);
wake_up_all(&nv_dev->flip_event_wq);
nv_drm_free(nv_flip);
}
int nv_drm_atomic_commit(struct drm_device *dev,
struct drm_atomic_state *state,
bool nonblock)
{
int ret = -EBUSY;
int i;
struct drm_crtc *crtc = NULL;
struct drm_crtc_state *crtc_state = NULL;
struct nv_drm_device *nv_dev = to_nv_device(dev);
/*
     * drm_mode_config_funcs::atomic_commit() requires returning -EBUSY for a
     * nonblocking commit if previous updates (commit tasks/flip events) are
     * pending. For blocking commits, it requires waiting for previous
     * updates to complete.
*/
if (nonblock) {
nv_drm_for_each_crtc_in_state(state, crtc, crtc_state, i) {
struct nv_drm_crtc *nv_crtc = to_nv_crtc(crtc);
/*
             * Holding nv_drm_crtc::flip_list_lock is not required here
             * because:
             *
             * The core DRM driver acquires locks for all affected crtcs
             * before calling into the ->commit() hook, therefore it is not
             * possible for other threads to call into the ->commit() hook
             * for the same crtcs and enqueue flip objects onto flip_list -
             *
             *   nv_drm_atomic_commit()
             *     |-> nv_drm_atomic_apply_modeset_config(commit=true)
             *           |-> nv_drm_crtc_enqueue_flip()
             *
             * The only possibility is that the list_empty() check races
             * with the code path dequeuing a flip object -
             *
             *   __nv_drm_handle_flip_event()
             *     |-> nv_drm_crtc_dequeue_flip()
             *
             * But that race cannot cause list_empty() to return an
             * incorrect result: nv_drm_crtc_dequeue_flip() in the middle of
             * updating the list cannot trick us into thinking the list is
             * empty when it isn't.
*/
if (!list_empty(&nv_crtc->flip_list)) {
return -EBUSY;
}
}
}
#if defined(NV_DRM_ATOMIC_HELPER_SWAP_STATE_HAS_STALL_ARG)
/*
     * nv_drm_atomic_commit() implements blocking/non-blocking atomic commit
     * using nv_drm_crtc::flip_list; it does not require any help from core
     * DRM helper functions to stall commit processing, so false is passed
     * for the 'stall' parameter.
     * In this context, failure from drm_atomic_helper_swap_state() is not
     * expected.
*/
#if defined(NV_DRM_ATOMIC_HELPER_SWAP_STATE_RETURN_INT)
ret = drm_atomic_helper_swap_state(state, false /* stall */);
if (WARN_ON(ret != 0)) {
return ret;
}
#else
drm_atomic_helper_swap_state(state, false /* stall */);
#endif
#else
drm_atomic_helper_swap_state(dev, state);
#endif
/*
     * nv_drm_atomic_commit() must not return failure after calling
     * drm_atomic_helper_swap_state().
*/
if ((ret = nv_drm_atomic_apply_modeset_config(
dev,
state, true /* commit */)) != 0) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to apply atomic modeset. Error code: %d",
ret);
goto done;
}
nv_drm_for_each_crtc_in_state(state, crtc, crtc_state, i) {
struct nv_drm_crtc *nv_crtc = to_nv_crtc(crtc);
struct nv_drm_crtc_state *nv_new_crtc_state =
to_nv_crtc_state(crtc->state);
/*
         * If nv_drm_atomic_apply_modeset_config() hasn't consumed the flip
         * object, no event will be generated for this flip, and we need to
         * process it here:
*/
if (nv_new_crtc_state->nv_flip != NULL) {
/*
             * First, if flips are still pending on this crtc, defer
             * processing of this flip until the last flip in the queue has
             * been processed, to ensure correct event delivery order.
*/
spin_lock(&nv_crtc->flip_list_lock);
if (!list_empty(&nv_crtc->flip_list)) {
struct nv_drm_flip *nv_last_flip =
list_last_entry(&nv_crtc->flip_list,
struct nv_drm_flip, list_entry);
list_add(&nv_new_crtc_state->nv_flip->list_entry,
&nv_last_flip->deferred_flip_list);
nv_new_crtc_state->nv_flip = NULL;
}
spin_unlock(&nv_crtc->flip_list_lock);
}
if (nv_new_crtc_state->nv_flip != NULL) {
/*
             * Then, if there are no more pending flips for this crtc,
             * deliver the event for the current flip immediately.
*/
if (nv_new_crtc_state->nv_flip->event != NULL) {
spin_lock(&dev->event_lock);
drm_crtc_send_vblank_event(crtc,
nv_new_crtc_state->nv_flip->event);
spin_unlock(&dev->event_lock);
}
nv_drm_free(nv_new_crtc_state->nv_flip);
nv_new_crtc_state->nv_flip = NULL;
}
if (!nonblock) {
/*
             * Holding nv_drm_crtc::flip_list_lock is not required here
             * because:
             *
             * The core DRM driver acquires locks for all affected crtcs
             * before calling into the ->commit() hook, therefore it is not
             * possible for other threads to call into the ->commit() hook
             * for the same crtcs and enqueue flip objects onto flip_list -
             *
             *   nv_drm_atomic_commit()
             *     |-> nv_drm_atomic_apply_modeset_config(commit=true)
             *           |-> nv_drm_crtc_enqueue_flip()
             *
             * The only possibility is that the list_empty() check races
             * with the code path dequeuing a flip object -
             *
             *   __nv_drm_handle_flip_event()
             *     |-> nv_drm_crtc_dequeue_flip()
             *
             * But that race cannot cause list_empty() to return an
             * incorrect result: nv_drm_crtc_dequeue_flip() in the middle of
             * updating the list cannot trick us into thinking the list is
             * empty when it isn't.
*/
if (wait_event_timeout(
nv_dev->flip_event_wq,
list_empty(&nv_crtc->flip_list),
                3 * HZ /* 3 seconds */) == 0) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Flip event timeout on head %u", nv_crtc->head);
}
}
}
done:
#if defined(NV_DRM_ATOMIC_STATE_REF_COUNTING_PRESENT)
/*
* If ref counting is present, state will be freed when the caller
* drops its reference after we return.
*/
#else
drm_atomic_state_free(state);
#endif
return 0;
}
void nv_drm_handle_flip_occurred(struct nv_drm_device *nv_dev,
NvU32 head, NvU32 plane)
{
struct nv_drm_crtc *nv_crtc = nv_drm_crtc_lookup(nv_dev, head);
if (NV_DRM_WARN(nv_crtc == NULL)) {
return;
}
__nv_drm_handle_flip_event(nv_crtc);
}
#endif

View File

@@ -0,0 +1,53 @@
/*
* Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NVIDIA_DRM_MODESET_H__
#define __NVIDIA_DRM_MODESET_H__
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
#include "nvkms-kapi.h"
struct drm_device;
struct drm_atomic_state;
struct drm_atomic_state *nv_drm_atomic_state_alloc(struct drm_device *dev);
void nv_drm_atomic_state_clear(struct drm_atomic_state *state);
void nv_drm_atomic_state_free(struct drm_atomic_state *state);
int nv_drm_atomic_check(struct drm_device *dev,
struct drm_atomic_state *state);
int nv_drm_atomic_commit(struct drm_device *dev,
struct drm_atomic_state *state, bool nonblock);
void nv_drm_handle_flip_occurred(struct nv_drm_device *nv_dev,
NvU32 head, NvU32 plane);
int nv_drm_shut_down_all_crtcs(struct drm_device *dev);
#endif /* NV_DRM_ATOMIC_MODESET_AVAILABLE */
#endif /* __NVIDIA_DRM_MODESET_H__ */

View File

@@ -0,0 +1,56 @@
/*
* Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NVIDIA_DRM_OS_INTERFACE_H__
#define __NVIDIA_DRM_OS_INTERFACE_H__
#include "nvidia-drm-conftest.h" /* NV_DRM_AVAILABLE */
#include "nvtypes.h"
#if defined(NV_DRM_AVAILABLE)
struct page;
/* Set to true when the atomic modeset feature is enabled. */
extern bool nv_drm_modeset_module_param;
void *nv_drm_calloc(size_t nmemb, size_t size);
void nv_drm_free(void *ptr);
char *nv_drm_asprintf(const char *fmt, ...);
void nv_drm_write_combine_flush(void);
int nv_drm_lock_user_pages(unsigned long address,
unsigned long pages_count, struct page ***pages);
void nv_drm_unlock_user_pages(unsigned long pages_count, struct page **pages);
void *nv_drm_vmap(struct page **pages, unsigned long pages_count);
void nv_drm_vunmap(void *address);
#endif
#endif /* __NVIDIA_DRM_OS_INTERFACE_H__ */

View File

@@ -0,0 +1,518 @@
/*
* Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_AVAILABLE)
#if defined(NV_DRM_DRMP_H_PRESENT)
#include <drm/drmP.h>
#endif
#include "nvidia-drm-priv.h"
#include "nvidia-drm-ioctl.h"
#include "nvidia-drm-gem.h"
#include "nvidia-drm-prime-fence.h"
#include "nvidia-dma-resv-helper.h"
#if defined(NV_DRM_FENCE_AVAILABLE)
#include "nvidia-dma-fence-helper.h"
struct nv_drm_fence_context {
struct nv_drm_device *nv_dev;
uint32_t context;
NvU64 fenceSemIndex; /* Index into semaphore surface */
/* Mapped semaphore surface */
struct NvKmsKapiMemory *pSemSurface;
NvU32 *pLinearAddress;
/* Protects nv_drm_fence_context::{pending, last_seqno} */
spinlock_t lock;
/*
     * Software signaling structures. __nv_drm_fence_context_new()
     * allocates the channel event and __nv_drm_fence_context_destroy()
     * frees it. There is no simultaneous read/write access to 'cb';
     * therefore it does not require spin-lock protection.
*/
struct NvKmsKapiChannelEvent *cb;
/* List of pending fences which are not yet signaled */
struct list_head pending;
unsigned last_seqno;
};
struct nv_drm_prime_fence {
struct list_head list_entry;
nv_dma_fence_t base;
spinlock_t lock;
};
static inline
struct nv_drm_prime_fence *to_nv_drm_prime_fence(nv_dma_fence_t *fence)
{
return container_of(fence, struct nv_drm_prime_fence, base);
}
static const char*
nv_drm_gem_prime_fence_op_get_driver_name(nv_dma_fence_t *fence)
{
return "NVIDIA";
}
static const char*
nv_drm_gem_prime_fence_op_get_timeline_name(nv_dma_fence_t *fence)
{
return "nvidia.prime";
}
static bool nv_drm_gem_prime_fence_op_enable_signaling(nv_dma_fence_t *fence)
{
    /* Nothing to do: fences are signaled from the channel event callback. */
return true;
}
static void nv_drm_gem_prime_fence_op_release(nv_dma_fence_t *fence)
{
struct nv_drm_prime_fence *nv_fence = to_nv_drm_prime_fence(fence);
nv_drm_free(nv_fence);
}
static signed long
nv_drm_gem_prime_fence_op_wait(nv_dma_fence_t *fence,
bool intr, signed long timeout)
{
/*
* If the waiter requests to wait with no timeout, force a timeout to ensure
* that it won't get stuck forever in the kernel if something were to go
* wrong with signaling, such as a malicious userspace not releasing the
* semaphore.
*
* 96 ms (roughly 6 frames @ 60 Hz) is arbitrarily chosen to be long enough
* that it should never get hit during normal operation, but not so long
* that the system becomes unresponsive.
*/
return nv_dma_fence_default_wait(fence, intr,
(timeout == MAX_SCHEDULE_TIMEOUT) ?
msecs_to_jiffies(96) : timeout);
}
static const nv_dma_fence_ops_t nv_drm_gem_prime_fence_ops = {
.get_driver_name = nv_drm_gem_prime_fence_op_get_driver_name,
.get_timeline_name = nv_drm_gem_prime_fence_op_get_timeline_name,
.enable_signaling = nv_drm_gem_prime_fence_op_enable_signaling,
.release = nv_drm_gem_prime_fence_op_release,
.wait = nv_drm_gem_prime_fence_op_wait,
};
static inline void
__nv_drm_prime_fence_signal(struct nv_drm_prime_fence *nv_fence)
{
list_del(&nv_fence->list_entry);
nv_dma_fence_signal(&nv_fence->base);
nv_dma_fence_put(&nv_fence->base);
}
static void nv_drm_gem_prime_force_fence_signal(
struct nv_drm_fence_context *nv_fence_context)
{
    /* spin_is_locked() is unreliable on uniprocessor kernels. */
    lockdep_assert_held(&nv_fence_context->lock);
while (!list_empty(&nv_fence_context->pending)) {
struct nv_drm_prime_fence *nv_fence = list_first_entry(
&nv_fence_context->pending,
typeof(*nv_fence),
list_entry);
__nv_drm_prime_fence_signal(nv_fence);
}
}
static void nv_drm_gem_prime_fence_event
(
void *dataPtr,
NvU32 dataU32
)
{
struct nv_drm_fence_context *nv_fence_context = dataPtr;
spin_lock(&nv_fence_context->lock);
while (!list_empty(&nv_fence_context->pending)) {
struct nv_drm_prime_fence *nv_fence = list_first_entry(
&nv_fence_context->pending,
typeof(*nv_fence),
list_entry);
        /*
         * Index into the surface with a 16-byte stride: pLinearAddress is
         * an NvU32 pointer, so an offset of index * 4 words is
         * index * 16 bytes.
         */
unsigned int seqno = *((nv_fence_context->pLinearAddress) +
(nv_fence_context->fenceSemIndex * 4));
if (nv_fence->base.seqno > seqno) {
/*
             * Fences in the list are placed in increasing order of
             * sequence number; break the loop once the first fence that
             * is not yet ready to signal is found.
*/
break;
}
__nv_drm_prime_fence_signal(nv_fence);
}
spin_unlock(&nv_fence_context->lock);
}
static inline struct nv_drm_fence_context *__nv_drm_fence_context_new(
struct nv_drm_device *nv_dev,
struct drm_nvidia_fence_context_create_params *p)
{
struct nv_drm_fence_context *nv_fence_context;
struct NvKmsKapiMemory *pSemSurface;
NvU32 *pLinearAddress;
    /* Allocate backing nvkms resources */
pSemSurface = nvKms->importMemory(nv_dev->pDevice,
p->size,
p->import_mem_nvkms_params_ptr,
p->import_mem_nvkms_params_size);
if (!pSemSurface) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to import fence semaphore surface");
goto failed;
}
if (!nvKms->mapMemory(nv_dev->pDevice,
pSemSurface,
NVKMS_KAPI_MAPPING_TYPE_KERNEL,
(void **) &pLinearAddress)) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to map fence semaphore surface");
goto failed_to_map_memory;
}
/*
* Allocate a fence context object, initialize it and allocate channel
* event for it.
*/
if ((nv_fence_context = nv_drm_calloc(
1,
sizeof(*nv_fence_context))) == NULL) {
goto failed_alloc_fence_context;
}
/*
     * nv_dma_fence_context_alloc() cannot fail, so we do not need
     * to check its return value.
*/
*nv_fence_context = (struct nv_drm_fence_context) {
.nv_dev = nv_dev,
.context = nv_dma_fence_context_alloc(1),
.pSemSurface = pSemSurface,
.pLinearAddress = pLinearAddress,
.fenceSemIndex = p->index,
};
INIT_LIST_HEAD(&nv_fence_context->pending);
spin_lock_init(&nv_fence_context->lock);
/*
     * Except for 'cb', the fence context must be completely initialized
     * before the channel event is allocated, because the fence context may
     * start receiving events immediately after allocation.
     *
     * There is no simultaneous read/write access to 'cb'; therefore it does
     * not require spin-lock protection.
*/
nv_fence_context->cb =
nvKms->allocateChannelEvent(nv_dev->pDevice,
nv_drm_gem_prime_fence_event,
nv_fence_context,
p->event_nvkms_params_ptr,
p->event_nvkms_params_size);
if (!nv_fence_context->cb) {
NV_DRM_DEV_LOG_ERR(nv_dev,
"Failed to allocate fence signaling event");
goto failed_to_allocate_channel_event;
}
return nv_fence_context;
failed_to_allocate_channel_event:
nv_drm_free(nv_fence_context);
failed_alloc_fence_context:
nvKms->unmapMemory(nv_dev->pDevice,
pSemSurface,
NVKMS_KAPI_MAPPING_TYPE_KERNEL,
(void *) pLinearAddress);
failed_to_map_memory:
nvKms->freeMemory(nv_dev->pDevice, pSemSurface);
failed:
return NULL;
}
static void __nv_drm_fence_context_destroy(
struct nv_drm_fence_context *nv_fence_context)
{
struct nv_drm_device *nv_dev = nv_fence_context->nv_dev;
/*
     * Free the channel event before destroying the fence context; otherwise
     * the event callback would continue to be called.
*/
nvKms->freeChannelEvent(nv_dev->pDevice, nv_fence_context->cb);
/* Force signal all pending fences and empty pending list */
spin_lock(&nv_fence_context->lock);
nv_drm_gem_prime_force_fence_signal(nv_fence_context);
spin_unlock(&nv_fence_context->lock);
/* Free nvkms resources */
nvKms->unmapMemory(nv_dev->pDevice,
nv_fence_context->pSemSurface,
NVKMS_KAPI_MAPPING_TYPE_KERNEL,
(void *) nv_fence_context->pLinearAddress);
nvKms->freeMemory(nv_dev->pDevice, nv_fence_context->pSemSurface);
nv_drm_free(nv_fence_context);
}
static nv_dma_fence_t *__nv_drm_fence_context_create_fence(
struct nv_drm_fence_context *nv_fence_context,
unsigned int seqno)
{
struct nv_drm_prime_fence *nv_fence;
int ret = 0;
if ((nv_fence = nv_drm_calloc(1, sizeof(*nv_fence))) == NULL) {
ret = -ENOMEM;
goto out;
}
spin_lock(&nv_fence_context->lock);
/*
* If seqno wrapped, force signal fences to make sure none of them
* get stuck.
*/
if (seqno < nv_fence_context->last_seqno) {
nv_drm_gem_prime_force_fence_signal(nv_fence_context);
}
INIT_LIST_HEAD(&nv_fence->list_entry);
spin_lock_init(&nv_fence->lock);
nv_dma_fence_init(&nv_fence->base, &nv_drm_gem_prime_fence_ops,
&nv_fence->lock, nv_fence_context->context,
seqno);
list_add_tail(&nv_fence->list_entry, &nv_fence_context->pending);
nv_fence_context->last_seqno = seqno;
spin_unlock(&nv_fence_context->lock);
out:
return ret != 0 ? ERR_PTR(ret) : &nv_fence->base;
}
int nv_drm_fence_supported_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep)
{
struct nv_drm_device *nv_dev = to_nv_device(dev);
return nv_dev->pDevice ? 0 : -EINVAL;
}
struct nv_drm_gem_fence_context {
struct nv_drm_gem_object base;
struct nv_drm_fence_context *nv_fence_context;
};
static inline struct nv_drm_gem_fence_context *to_gem_fence_context(
struct nv_drm_gem_object *nv_gem)
{
if (nv_gem != NULL) {
return container_of(nv_gem, struct nv_drm_gem_fence_context, base);
}
return NULL;
}
/*
 * Teardown of the 'struct nv_drm_gem_fence_context' object is not expected
 * to happen from any worker thread; if it does, it causes a deadlock,
 * because the teardown sequence flushes all existing worker threads.
*/
static void __nv_drm_gem_fence_context_free(struct nv_drm_gem_object *nv_gem)
{
struct nv_drm_gem_fence_context *nv_gem_fence_context =
to_gem_fence_context(nv_gem);
__nv_drm_fence_context_destroy(nv_gem_fence_context->nv_fence_context);
nv_drm_free(nv_gem_fence_context);
}
const struct nv_drm_gem_object_funcs nv_gem_fence_context_ops = {
.free = __nv_drm_gem_fence_context_free,
};
static inline
struct nv_drm_gem_fence_context *__nv_drm_gem_object_fence_context_lookup(
struct drm_device *dev,
struct drm_file *filp,
u32 handle)
{
struct nv_drm_gem_object *nv_gem =
nv_drm_gem_object_lookup(dev, filp, handle);
if (nv_gem != NULL && nv_gem->ops != &nv_gem_fence_context_ops) {
nv_drm_gem_object_unreference_unlocked(nv_gem);
return NULL;
}
return to_gem_fence_context(nv_gem);
}
int nv_drm_fence_context_create_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep)
{
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct drm_nvidia_fence_context_create_params *p = data;
struct nv_drm_gem_fence_context *nv_gem_fence_context = NULL;
if ((nv_gem_fence_context = nv_drm_calloc(
1,
sizeof(struct nv_drm_gem_fence_context))) == NULL) {
goto done;
}
if ((nv_gem_fence_context->nv_fence_context =
__nv_drm_fence_context_new(nv_dev, p)) == NULL) {
goto fence_context_new_failed;
}
nv_drm_gem_object_init(nv_dev,
&nv_gem_fence_context->base,
&nv_gem_fence_context_ops,
0 /* size */,
NULL /* pMemory */);
return nv_drm_gem_handle_create_drop_reference(filep,
&nv_gem_fence_context->base,
&p->handle);
fence_context_new_failed:
nv_drm_free(nv_gem_fence_context);
done:
return -ENOMEM;
}
int nv_drm_gem_fence_attach_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep)
{
int ret = -EINVAL;
struct nv_drm_device *nv_dev = to_nv_device(dev);
struct drm_nvidia_gem_fence_attach_params *p = data;
struct nv_drm_gem_object *nv_gem;
struct nv_drm_gem_fence_context *nv_gem_fence_context;
nv_dma_fence_t *fence;
nv_gem = nv_drm_gem_object_lookup(nv_dev->dev, filep, p->handle);
if (!nv_gem) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to lookup gem object for fence attach: 0x%08x",
p->handle);
goto done;
}
    if ((nv_gem_fence_context = __nv_drm_gem_object_fence_context_lookup(
nv_dev->dev,
filep,
p->fence_context_handle)) == NULL) {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to lookup gem object for fence context: 0x%08x",
p->fence_context_handle);
goto fence_context_lookup_failed;
}
if (IS_ERR(fence = __nv_drm_fence_context_create_fence(
nv_gem_fence_context->nv_fence_context,
p->sem_thresh))) {
ret = PTR_ERR(fence);
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to allocate fence: 0x%08x", p->handle);
goto fence_context_create_fence_failed;
}
nv_dma_resv_add_excl_fence(&nv_gem->resv, fence);
ret = 0;
fence_context_create_fence_failed:
nv_drm_gem_object_unreference_unlocked(&nv_gem_fence_context->base);
fence_context_lookup_failed:
nv_drm_gem_object_unreference_unlocked(nv_gem);
done:
return ret;
}
#endif /* NV_DRM_FENCE_AVAILABLE */
#endif /* NV_DRM_AVAILABLE */

View File

@@ -0,0 +1,48 @@
/*
* Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NVIDIA_DRM_PRIME_FENCE_H__
#define __NVIDIA_DRM_PRIME_FENCE_H__
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_AVAILABLE)
struct drm_file;
struct drm_device;
#if defined(NV_DRM_FENCE_AVAILABLE)
int nv_drm_fence_supported_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep);
int nv_drm_fence_context_create_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep);
int nv_drm_gem_fence_attach_ioctl(struct drm_device *dev,
void *data, struct drm_file *filep);
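/*
 * Illustrative sketch (not part of this header): the expected ioctl flow from
 * userspace. 'fd', 'gem_handle', 'threshold' and the zeroed create params are
 * assumptions made for the example; a real client fills the create params
 * with NVKMS-provided data first.
 *
 *   struct drm_nvidia_fence_context_create_params ctx = { 0 };
 *
 *   if (ioctl(fd, DRM_IOCTL_NVIDIA_FENCE_SUPPORTED) == 0 &&
 *       ioctl(fd, DRM_IOCTL_NVIDIA_FENCE_CONTEXT_CREATE, &ctx) == 0) {
 *       struct drm_nvidia_gem_fence_attach_params attach = {
 *           .handle = gem_handle,
 *           .fence_context_handle = ctx.handle,
 *           .sem_thresh = threshold,
 *       };
 *       ioctl(fd, DRM_IOCTL_NVIDIA_GEM_FENCE_ATTACH, &attach);
 *   }
 */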
#endif /* NV_DRM_FENCE_AVAILABLE */
#endif /* NV_DRM_AVAILABLE */
#endif /* __NVIDIA_DRM_PRIME_FENCE_H__ */

View File

@@ -0,0 +1,139 @@
/*
* Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NVIDIA_DRM_PRIV_H__
#define __NVIDIA_DRM_PRIV_H__
#include "nvidia-drm-conftest.h" /* NV_DRM_AVAILABLE */
#if defined(NV_DRM_AVAILABLE)
#if defined(NV_DRM_DRMP_H_PRESENT)
#include <drm/drmP.h>
#endif
#if defined(NV_DRM_DRM_DEVICE_H_PRESENT)
#include <drm/drm_device.h>
#endif
#if defined(NV_DRM_DRM_GEM_H_PRESENT)
#include <drm/drm_gem.h>
#endif
#include "nvidia-drm-os-interface.h"
#include "nvkms-kapi.h"
#define NV_DRM_LOG_ERR(__fmt, ...) \
DRM_ERROR("[nvidia-drm] " __fmt "\n", ##__VA_ARGS__)
#define NV_DRM_LOG_INFO(__fmt, ...) \
DRM_INFO("[nvidia-drm] " __fmt "\n", ##__VA_ARGS__)
#define NV_DRM_DEV_LOG_INFO(__dev, __fmt, ...) \
NV_DRM_LOG_INFO("[GPU ID 0x%08x] " __fmt, __dev->gpu_info.gpu_id, ##__VA_ARGS__)
#define NV_DRM_DEV_LOG_ERR(__dev, __fmt, ...) \
NV_DRM_LOG_ERR("[GPU ID 0x%08x] " __fmt, __dev->gpu_info.gpu_id, ##__VA_ARGS__)
#define NV_DRM_WARN(__condition) WARN_ON((__condition))
#define NV_DRM_DEBUG_DRIVER(__fmt, ...) \
DRM_DEBUG_DRIVER("[nvidia-drm] " __fmt "\n", ##__VA_ARGS__)
#define NV_DRM_DEV_DEBUG_DRIVER(__dev, __fmt, ...) \
DRM_DEBUG_DRIVER("[GPU ID 0x%08x] " __fmt, \
__dev->gpu_info.gpu_id, ##__VA_ARGS__)
struct nv_drm_device {
nv_gpu_info_t gpu_info;
struct drm_device *dev;
struct NvKmsKapiDevice *pDevice;
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
/*
* Lock to protect drm-subsystem and fields of this structure
* from concurrent access.
*
     * Do not acquire this lock while a lock from the core drm-subsystem is
     * already held; the locking order should look like this -
*
* mutex_lock(nv_drm_device::lock);
* ....
* mutex_lock(drm_device::mode_config::lock);
* ....
* .......
* mutex_unlock(drm_device::mode_config::lock);
* ........
* ..
* mutex_lock(drm_device::struct_mutex);
* ....
* ........
* mutex_unlock(drm_device::struct_mutex);
* ..
* mutex_unlock(nv_drm_device::lock);
*/
struct mutex lock;
NvU32 pitchAlignment;
NvU8 genericPageKind;
NvU8 pageKindGeneration;
NvU8 sectorLayout;
#if defined(NV_DRM_FORMAT_MODIFIERS_PRESENT)
NvU64 modifiers[6 /* block linear */ + 1 /* linear */ + 1 /* terminator */];
#endif
atomic_t enable_event_handling;
/**
* @flip_event_wq:
*
     * The wait queue on which nv_drm_atomic_commit() sleeps until the
     * next flip event occurs.
*/
wait_queue_head_t flip_event_wq;
#endif
NvBool hasVideoMemory;
NvBool supportsSyncpts;
struct drm_property *nv_out_fence_property;
struct nv_drm_device *next;
};
static inline struct nv_drm_device *to_nv_device(
struct drm_device *dev)
{
return dev->dev_private;
}
extern const struct NvKmsKapiFunctionsTable* const nvKms;
#endif /* defined(NV_DRM_AVAILABLE) */
#endif /* __NVIDIA_DRM_PRIV_H__ */

View File

@@ -0,0 +1,231 @@
/*
* Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include "nvidia-drm-conftest.h" /* NV_DRM_ATOMIC_MODESET_AVAILABLE */
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
#if defined(NV_DRM_DRMP_H_PRESENT)
#include <drm/drmP.h>
#endif
#if defined(NV_DRM_DRM_PLANE_H_PRESENT)
#include <drm/drm_plane.h>
#endif
#include <drm/drm_modes.h>
#include <uapi/drm/drm_fourcc.h>
#include "nvidia-drm-priv.h"
#include "nvidia-drm-utils.h"
struct NvKmsKapiConnectorInfo*
nvkms_get_connector_info(struct NvKmsKapiDevice *pDevice,
NvKmsKapiConnector hConnector)
{
struct NvKmsKapiConnectorInfo *connectorInfo =
nv_drm_calloc(1, sizeof(*connectorInfo));
if (connectorInfo == NULL) {
return ERR_PTR(-ENOMEM);
}
if (!nvKms->getConnectorInfo(pDevice, hConnector, connectorInfo)) {
nv_drm_free(connectorInfo);
return ERR_PTR(-EINVAL);
}
return connectorInfo;
}
int
nvkms_connector_signal_to_drm_encoder_signal(NvKmsConnectorSignalFormat format)
{
switch (format) {
default:
case NVKMS_CONNECTOR_SIGNAL_FORMAT_UNKNOWN:
return DRM_MODE_ENCODER_NONE;
case NVKMS_CONNECTOR_SIGNAL_FORMAT_TMDS:
case NVKMS_CONNECTOR_SIGNAL_FORMAT_DP:
return DRM_MODE_ENCODER_TMDS;
case NVKMS_CONNECTOR_SIGNAL_FORMAT_LVDS:
return DRM_MODE_ENCODER_LVDS;
case NVKMS_CONNECTOR_SIGNAL_FORMAT_VGA:
return DRM_MODE_ENCODER_DAC;
case NVKMS_CONNECTOR_SIGNAL_FORMAT_DSI:
return DRM_MODE_ENCODER_DSI;
}
}
int nvkms_connector_type_to_drm_connector_type(NvKmsConnectorType type,
NvBool internal)
{
switch (type) {
default:
case NVKMS_CONNECTOR_TYPE_UNKNOWN:
return DRM_MODE_CONNECTOR_Unknown;
case NVKMS_CONNECTOR_TYPE_DP:
return
internal ?
DRM_MODE_CONNECTOR_eDP : DRM_MODE_CONNECTOR_DisplayPort;
case NVKMS_CONNECTOR_TYPE_HDMI:
return DRM_MODE_CONNECTOR_HDMIA;
case NVKMS_CONNECTOR_TYPE_DVI_D:
return DRM_MODE_CONNECTOR_DVID;
case NVKMS_CONNECTOR_TYPE_DVI_I:
return DRM_MODE_CONNECTOR_DVII;
case NVKMS_CONNECTOR_TYPE_LVDS:
return DRM_MODE_CONNECTOR_LVDS;
case NVKMS_CONNECTOR_TYPE_VGA:
return DRM_MODE_CONNECTOR_VGA;
case NVKMS_CONNECTOR_TYPE_DSI:
return DRM_MODE_CONNECTOR_DSI;
case NVKMS_CONNECTOR_TYPE_DP_SERIALIZER:
return DRM_MODE_CONNECTOR_DisplayPort;
}
}
void
nvkms_display_mode_to_drm_mode(const struct NvKmsKapiDisplayMode *displayMode,
struct drm_display_mode *mode)
{
#if defined(NV_DRM_DISPLAY_MODE_HAS_VREFRESH)
mode->vrefresh = (displayMode->timings.refreshRate + 500) / 1000; /* In Hz */
#endif
    mode->clock = (displayMode->timings.pixelClockHz + 500) / 1000; /* In kHz */
mode->hdisplay = displayMode->timings.hVisible;
mode->hsync_start = displayMode->timings.hSyncStart;
mode->hsync_end = displayMode->timings.hSyncEnd;
mode->htotal = displayMode->timings.hTotal;
mode->hskew = displayMode->timings.hSkew;
mode->vdisplay = displayMode->timings.vVisible;
mode->vsync_start = displayMode->timings.vSyncStart;
mode->vsync_end = displayMode->timings.vSyncEnd;
mode->vtotal = displayMode->timings.vTotal;
if (displayMode->timings.flags.interlaced) {
mode->flags |= DRM_MODE_FLAG_INTERLACE;
}
if (displayMode->timings.flags.doubleScan) {
mode->flags |= DRM_MODE_FLAG_DBLSCAN;
}
if (displayMode->timings.flags.hSyncPos) {
mode->flags |= DRM_MODE_FLAG_PHSYNC;
}
if (displayMode->timings.flags.hSyncNeg) {
mode->flags |= DRM_MODE_FLAG_NHSYNC;
}
if (displayMode->timings.flags.vSyncPos) {
mode->flags |= DRM_MODE_FLAG_PVSYNC;
}
if (displayMode->timings.flags.vSyncNeg) {
mode->flags |= DRM_MODE_FLAG_NVSYNC;
}
mode->width_mm = displayMode->timings.widthMM;
mode->height_mm = displayMode->timings.heightMM;
if (strlen(displayMode->name) != 0) {
memcpy(
mode->name, displayMode->name,
min(sizeof(mode->name), sizeof(displayMode->name)));
mode->name[sizeof(mode->name) - 1] = '\0';
} else {
drm_mode_set_name(mode);
}
}
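/*
 * Worked example (illustrative, assuming a 1920x1080@60 mode): with
 * timings.pixelClockHz == 148500000 and timings.refreshRate == 60000
 * (units of 1/1000 Hz), the conversion above yields mode->clock == 148500
 * (kHz) and, where supported, mode->vrefresh == 60 (Hz). The '+ 500' terms
 * round to the nearest unit instead of truncating. The reverse conversion
 * below multiplies by 1000 to return to Hz.
 */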
void drm_mode_to_nvkms_display_mode(const struct drm_display_mode *src,
struct NvKmsKapiDisplayMode *dst)
{
#if defined(NV_DRM_DISPLAY_MODE_HAS_VREFRESH)
dst->timings.refreshRate = src->vrefresh * 1000;
#else
dst->timings.refreshRate = drm_mode_vrefresh(src) * 1000;
#endif
dst->timings.pixelClockHz = src->clock * 1000; /* In Hz */
dst->timings.hVisible = src->hdisplay;
dst->timings.hSyncStart = src->hsync_start;
dst->timings.hSyncEnd = src->hsync_end;
dst->timings.hTotal = src->htotal;
dst->timings.hSkew = src->hskew;
dst->timings.vVisible = src->vdisplay;
dst->timings.vSyncStart = src->vsync_start;
dst->timings.vSyncEnd = src->vsync_end;
dst->timings.vTotal = src->vtotal;
if (src->flags & DRM_MODE_FLAG_INTERLACE) {
dst->timings.flags.interlaced = NV_TRUE;
} else {
dst->timings.flags.interlaced = NV_FALSE;
}
if (src->flags & DRM_MODE_FLAG_DBLSCAN) {
dst->timings.flags.doubleScan = NV_TRUE;
} else {
dst->timings.flags.doubleScan = NV_FALSE;
}
if (src->flags & DRM_MODE_FLAG_PHSYNC) {
dst->timings.flags.hSyncPos = NV_TRUE;
} else {
dst->timings.flags.hSyncPos = NV_FALSE;
}
if (src->flags & DRM_MODE_FLAG_NHSYNC) {
dst->timings.flags.hSyncNeg = NV_TRUE;
} else {
dst->timings.flags.hSyncNeg = NV_FALSE;
}
if (src->flags & DRM_MODE_FLAG_PVSYNC) {
dst->timings.flags.vSyncPos = NV_TRUE;
} else {
dst->timings.flags.vSyncPos = NV_FALSE;
}
if (src->flags & DRM_MODE_FLAG_NVSYNC) {
dst->timings.flags.vSyncNeg = NV_TRUE;
} else {
dst->timings.flags.vSyncNeg = NV_FALSE;
}
dst->timings.widthMM = src->width_mm;
dst->timings.heightMM = src->height_mm;
memcpy(dst->name, src->name, min(sizeof(dst->name), sizeof(src->name)));
}
#endif

View File

@@ -0,0 +1,54 @@
/*
* Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __NVIDIA_DRM_UTILS_H__
#define __NVIDIA_DRM_UTILS_H__
#include "nvidia-drm-conftest.h"
#if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)
#include "nvkms-kapi.h"
enum drm_plane_type;
struct drm_display_mode;
struct NvKmsKapiConnectorInfo*
nvkms_get_connector_info(struct NvKmsKapiDevice *pDevice,
NvKmsKapiConnector hConnector);
int nvkms_connector_signal_to_drm_encoder_signal(
NvKmsConnectorSignalFormat format);
int nvkms_connector_type_to_drm_connector_type(NvKmsConnectorType type,
NvBool internal);
void nvkms_display_mode_to_drm_mode(
const struct NvKmsKapiDisplayMode *displayMode,
struct drm_display_mode *mode);
void drm_mode_to_nvkms_display_mode(const struct drm_display_mode *src,
struct NvKmsKapiDisplayMode *dst);
#endif /* NV_DRM_ATOMIC_MODESET_AVAILABLE */
#endif /* __NVIDIA_DRM_UTILS_H__ */

View File

@@ -0,0 +1,117 @@
###########################################################################
# Kbuild fragment for nvidia-drm.ko
###########################################################################
#
# Define NVIDIA_DRM_{SOURCES,OBJECTS}
#
NVIDIA_DRM_SOURCES =
NVIDIA_DRM_SOURCES += nvidia-drm/nvidia-drm.c
NVIDIA_DRM_SOURCES += nvidia-drm/nvidia-drm-drv.c
NVIDIA_DRM_SOURCES += nvidia-drm/nvidia-drm-utils.c
NVIDIA_DRM_SOURCES += nvidia-drm/nvidia-drm-crtc.c
NVIDIA_DRM_SOURCES += nvidia-drm/nvidia-drm-encoder.c
NVIDIA_DRM_SOURCES += nvidia-drm/nvidia-drm-connector.c
NVIDIA_DRM_SOURCES += nvidia-drm/nvidia-drm-gem.c
NVIDIA_DRM_SOURCES += nvidia-drm/nvidia-drm-fb.c
NVIDIA_DRM_SOURCES += nvidia-drm/nvidia-drm-modeset.c
NVIDIA_DRM_SOURCES += nvidia-drm/nvidia-drm-prime-fence.c
NVIDIA_DRM_SOURCES += nvidia-drm/nvidia-drm-linux.c
NVIDIA_DRM_SOURCES += nvidia-drm/nvidia-drm-helper.c
NVIDIA_DRM_SOURCES += nvidia-drm/nv-pci-table.c
NVIDIA_DRM_SOURCES += nvidia-drm/nvidia-drm-gem-nvkms-memory.c
NVIDIA_DRM_SOURCES += nvidia-drm/nvidia-drm-gem-user-memory.c
NVIDIA_DRM_SOURCES += nvidia-drm/nvidia-drm-gem-dma-buf.c
NVIDIA_DRM_SOURCES += nvidia-drm/nvidia-drm-format.c
NVIDIA_DRM_OBJECTS = $(patsubst %.c,%.o,$(NVIDIA_DRM_SOURCES))
obj-m += nvidia-drm.o
nvidia-drm-y := $(NVIDIA_DRM_OBJECTS)
NVIDIA_DRM_KO = nvidia-drm/nvidia-drm.ko
NV_KERNEL_MODULE_TARGETS += $(NVIDIA_DRM_KO)
#
# Define nvidia-drm.ko-specific CFLAGS.
#
NVIDIA_DRM_CFLAGS += -I$(src)/nvidia-drm
NVIDIA_DRM_CFLAGS += -UDEBUG -U_DEBUG -DNDEBUG -DNV_BUILD_MODULE_INSTANCES=0
$(call ASSIGN_PER_OBJ_CFLAGS, $(NVIDIA_DRM_OBJECTS), $(NVIDIA_DRM_CFLAGS))
#
# Register the conftests needed by nvidia-drm.ko
#
NV_OBJECTS_DEPEND_ON_CONFTEST += $(NVIDIA_DRM_OBJECTS)
NV_CONFTEST_GENERIC_COMPILE_TESTS += drm_available
NV_CONFTEST_GENERIC_COMPILE_TESTS += drm_atomic_available
NV_CONFTEST_GENERIC_COMPILE_TESTS += is_export_symbol_gpl_refcount_inc
NV_CONFTEST_GENERIC_COMPILE_TESTS += is_export_symbol_gpl_refcount_dec_and_test
NV_CONFTEST_GENERIC_COMPILE_TESTS += drm_alpha_blending_available
NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_dev_unref
NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_reinit_primary_mode_group
NV_CONFTEST_FUNCTION_COMPILE_TESTS += get_user_pages_remote
NV_CONFTEST_FUNCTION_COMPILE_TESTS += get_user_pages
NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_gem_object_lookup
NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_atomic_state_ref_counting
NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_driver_has_gem_prime_res_obj
NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_atomic_helper_connector_dpms
NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_connector_funcs_have_mode_in_name
NV_CONFTEST_FUNCTION_COMPILE_TESTS += vmf_insert_pfn
NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_framebuffer_get
NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_gem_object_get
NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_dev_put
NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_format_num_planes
NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_connector_for_each_possible_encoder
NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_rotation_available
NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_vma_offset_exact_lookup_locked
NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_gem_object_put_unlocked
NV_CONFTEST_FUNCTION_COMPILE_TESTS += nvhost_dma_fence_unpack
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_bus_present
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_bus_has_bus_type
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_bus_has_get_irq
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_bus_has_get_name
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_driver_has_device_list
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_driver_has_legacy_dev_list
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_driver_has_set_busid
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_crtc_state_has_connectors_changed
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_init_function_args
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_helper_mode_fill_fb_struct
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_master_drop_has_from_release_arg
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_driver_unload_has_int_return_type
NV_CONFTEST_TYPE_COMPILE_TESTS += vm_fault_has_address
NV_CONFTEST_TYPE_COMPILE_TESTS += vm_ops_fault_removed_vma_arg
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_atomic_helper_crtc_destroy_state_has_crtc_arg
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_atomic_helper_plane_destroy_state_has_plane_arg
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_mode_object_find_has_file_priv_arg
NV_CONFTEST_TYPE_COMPILE_TESTS += dma_buf_owner
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_connector_list_iter
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_atomic_helper_swap_state_has_stall_arg
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_driver_prime_flag_present
NV_CONFTEST_TYPE_COMPILE_TESTS += vm_fault_t
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_gem_object_has_resv
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_crtc_state_has_async_flip
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_crtc_state_has_pageflip_flags
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_format_modifiers_present
NV_CONFTEST_TYPE_COMPILE_TESTS += mm_has_mmap_lock
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_vma_node_is_allowed_has_tag_arg
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_vma_offset_node_has_readonly
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_display_mode_has_vrefresh
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_driver_master_set_has_int_return_type
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_driver_has_gem_free_object
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_prime_pages_to_sg_has_drm_device_arg
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_driver_has_gem_prime_callbacks
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_crtc_atomic_check_has_atomic_state_arg
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_gem_object_vmap_has_map_arg
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_plane_atomic_check_has_atomic_state_arg
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_device_has_pdev
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_crtc_state_has_no_vblank
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_mode_config_has_allow_fb_modifiers
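
Each entry in these lists registers a compile test that is run against the target kernel's headers at build time; when a test succeeds, the build defines a feature macro which the driver sources branch on. The sketch below illustrates the consuming side; NV_DRM_GEM_OBJECT_GET_PRESENT is assumed to follow the conftest naming convention for the drm_gem_object_get test registered above, rather than being taken from this file.

/* Illustrative sketch only: branch on a conftest-generated feature macro. */
static inline void nv_drm_gem_ref_sketch(struct drm_gem_object *gem)
{
#if defined(NV_DRM_GEM_OBJECT_GET_PRESENT)
    drm_gem_object_get(gem);        /* newer kernels */
#else
    drm_gem_object_reference(gem);  /* older kernels */
#endif
}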

59
nvidia-drm/nvidia-drm.c Normal file

@@ -0,0 +1,59 @@
/*
* Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include "nvidia-drm.h"
#if defined(NV_DRM_AVAILABLE)
#include "nvidia-drm-priv.h"
#include "nvidia-drm-drv.h"
static struct NvKmsKapiFunctionsTable nvKmsFuncsTable = {
.versionString = NV_VERSION_STRING,
};
const struct NvKmsKapiFunctionsTable* const nvKms = &nvKmsFuncsTable;
#endif
int nv_drm_init(void)
{
#if defined(NV_DRM_AVAILABLE)
if (!nvKmsKapiGetFunctionsTable(&nvKmsFuncsTable)) {
NV_DRM_LOG_ERR(
"Version mismatch: nvidia-modeset.ko(%s) nvidia-drm.ko(%s)",
nvKmsFuncsTable.versionString, NV_VERSION_STRING);
return -EINVAL;
}
return nv_drm_probe_devices();
#else
return 0;
#endif
}
void nv_drm_exit(void)
{
#if defined(NV_DRM_AVAILABLE)
nv_drm_remove_devices();
#endif
}
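
nv_drm_init() and nv_drm_exit() are called from the module's Linux entry points (the glue lives in nvidia-drm-linux.c, per the Kbuild source list above). A minimal sketch of that wiring, with illustrative function names:

/* Illustrative sketch only: module entry points invoking the init/exit
 * routines above. The actual wiring lives in nvidia-drm-linux.c. */
#include <linux/module.h>

static int __init nv_drm_module_init_sketch(void)
{
    return nv_drm_init();
}

static void __exit nv_drm_module_exit_sketch(void)
{
    nv_drm_exit();
}

module_init(nv_drm_module_init_sketch);
module_exit(nv_drm_module_exit_sketch);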

Some files were not shown because too many files have changed in this diff.