Fermion Path Integrals - Grassmann Valued Fields?

I've seen many people (inc. Xiao-Gang Wen) claim that Grassmann numbers don't have any "physical meaning" - at least none that they are aware of. What is the opinion of people on this board?

I want to take Grassmann-valued fields seriously as a physical concept, but I'm having a difficult time doing so. There are two main issues.

In the first place, Berezin integration has nothing to do with sums. Should we still think of a Grassmann-valued field as taking some stochastic sequence of configurations, each of which has some contribution to the observable we calculate? It seems not so, but then the entire notion of "path integration" seems only to apply to bosons - and most particles are not even bosons.

In the second place, it doesn't even seem like the concept of a "Grassmann-valued field" means anything in terms of quantum states. In the case of bosons, a field s(x) is normal (so [s(x), s*(x)] = 0, where s*(x) is the Hermitian conjugate of s(x)) and at a fixed time t = 0, [s(x),s(y)] = 0. Because of these two commutation rules, it is possible to simultaneously diagonalize the s(x)'s at each spatial point and define a quantum state that is an eigenvector of each s(x). Because of this, field configurations are in one-to-one correspondence with a complete set of quantum states. For fermions, these commutation relations are replaced with anti-commutation relations. Similar results seem not to exist. The conclusion is that Grassmann-valued field configurations are NOT in one-to-one correspondence with fermionic quantum states. This makes interpretation even more difficult.
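To make that second point concrete, here is a toy check for a single fermionic mode (my own illustration, just numpy, nothing specific to any particular field theory): the annihilation operator is not normal and has no eigenvectors with nonzero eigenvalue, so there is no fermionic analogue of a "field eigenstate" labelled by an ordinary number.

import numpy as np

# Single fermionic mode in the basis {|0>, |1>}: c|1> = |0>, c|0> = 0.
c = np.array([[0.0, 1.0],
              [0.0, 0.0]])
cdag = c.conj().T

# The canonical anticommutator {c, c^dag} = 1 holds:
print(np.allclose(c @ cdag + cdag @ c, np.eye(2)))   # True

# but c is NOT normal, unlike the bosonic s(x) above:
print(np.allclose(c @ cdag, cdag @ c))               # False

# and, being nilpotent, c has no eigenvectors with nonzero eigenvalue:
print(np.linalg.eigvals(c))                          # [0. 0.]

The only way to write an "eigenstate" of c with a nonzero eigenvalue is to let the eigenvalue be a Grassmann number, and that is exactly where my discomfort starts.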

Thoughts?

The Grassmann values are fudge factors to force the theory to work mathematically when it doesn't work otherwise. There is no physical meaning to it other than a physicist reaching into his ass and pulling it out.

Seriously, fermions only work in a classical sense. But the holy commandment that is quantum theory shall not be disobeyed so physicists made reality obey it.

>In the second place, it doesn't even seem like the concept of a "grassmann valued field" means anything in terms of quantum states

You are correct. Quantum physics is not physics based, it's math based.

>fudge factors
I read something like this in one of Hawking's books.

What does "fermions only work in a classical sense" mean? The bosonic commutator obeys the axioms of a Lie algebra, and so a lot of classical things that follow for Poisson brackets follow for Bosons. They don't follow for fermionic fields. I am not sure in what sense fermions make sense in a classical context.

When you say "there is no physical meaning to it...", you must appreciate the parallels between the fermionic and boson path integrals, at least symbolically. To assume that all means nothing, and is just a fudge factor, doesn't make much sense to me. That is my main motivation to pursue this. To use your analogy, it's like if a physicist reached into his ass, but instead of finding shit, found a well-cut diamond. It works too well for me to ignore it.

Your final point that quantum physics is not physics based makes no sense. Say we define "physics based" in whatever way makes your claim come out true. Then almost all of physics is not physics based, and fermionic physics is even less physics based. Why? Should I just accept this?

Fermions obey classical field theory. When physicists were calculating QFT for fermions, they ran into a problem. It didn't work, so they derived Grassmann numbers to build QFT for it. Essentially.

I don't think I'm going to be able to provide you with a satisfactory answer. Especially if you are asking in the context of learning for a class.

Fermion and boson path integrals are fun to play around with. But neither has been confirmed experimentally. Neither was experimentally derived. The parallels between them do not satisfy me as a justification when we still haven't fully observed anything subatomic.

What I mean by not physics based is that the observations from which we derive quantum theory are incomplete, even though quantum theory provides a "good enough" model from time to time.

I'm still not sure what you mean by "Fermions obey classic[al] field theory". I could treat the Dirac equation as a PDE and solve it in a classical field-theoretic sense, but I would understand the result as a quantum probability distribution and not as a field. The funny thing is that if you forget about fields for a moment and introduce a finite number of quantum degrees of freedom (x1, x2, x3, ...) and treat them as you would in the usual quantum theory (sans field), with the exception that you add a factor of (-1) to the paths in one of the two homotopy classes of paths in the configuration space of identical particles in R^{3}, you'd get the correct thing. So, the problem isn't quantum mechanics. It's the introduction of a field where the particle number is no longer fixed. I repeat: quantum mechanics is not at all the problem. Fermions can be easily described by quantum mechanics, as experiments in solid state physics show.

We've observed the anomalous magnetic moment of the muon. Is that not subatomic enough? I'm not sure what more justification you'd need. Quantum mechanics provides a "good enough" model every time. There is no exception.

>In the first place, Berezin integration has nothing to do with sums.
Why should it?
>Should we still think of a grassmann valued field as taking some stochastic sequence of configurations, each of which has some contribution to the observable we calculate?
Yes? Berezin integration is a linear functional from the algebra of Grassmann numbers to [math]\mathbb{C}[/math], so I don't see why not? (There's a toy implementation of exactly this at the end of this post.)
>It seems not so,
Why?
>but then the entire notion of "path integration" seems only to apply to bosons
No? It applies to any particle with any spin statistics given sufficient regularity conditions (i.e. Swieca-Strozzi type or when there's some non-trivial topology/geometry to exploit, like in TQFTs or CFTs).
>and most particles are not even bosons.
In more than two spatial dimensions there are only bosons and fermions.
>In the second place, it doesn't even seem like the concept of a "grassmann valued field" means anything in terms of quantum states.
Except it does? They're representations of the generators of an infinite dimensional Clifford algebra.
>In the case of bosons, a field s(x) is normal (so [s(x), s*(x)] = 0; where s*(x) is the Hermitian conjugate of s(x)) and at a fixed time t = 0, [s(x),s(y)] = 0.
That's not what a boson is at all. Bosons satisfy the Heisenberg algebra [math][\psi^\dagger(x),\psi(y)]_{x^0 = y^0} = \delta({\bf x}-{\bf y}) \neq 0[/math]. In general bosons are operator-valued distributions, and so are fermions.
>Because of these two commutation rules, it is possible to simultaneously diagonalize the s(x)'s at each spacetime point
That's not true at all. There'd be no off-diagonal S-matrix elements otherwise.

>and define a quantum state that is an eigenvector of each s(x).
No, the Fock states are eigenstates of the position and momentum operators, not of the fields themselves.
>Because of this, field configurations are in one-to-one correspondence with a complete set of quantum states.
No they're not. Do you know what instantons are?
>For fermions, these commutation relations are replaced with anti-commutation relations. Similar results seem not to exist.
Why would you want to get similar results as bosons for fermions?
>The conclusion is that grassmann valued field configurations are NOT in one-to-one correspondence with fermionic quantum states.
That doesn't follow at all.
>This makes interpretation even more difficult.
By your own misunderstanding of the entire matter it appears.
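Since you seem hung up on what "a linear functional from the algebra of Grassmann numbers to C" actually does, here is a minimal toy implementation (my own sketch, no library; conventions: ∫dθ_i θ_i = 1, ∫dθ_i 1 = 0, iterated integrals applied right to left). It represents elements of the Grassmann algebra, multiplies them with the correct signs, and evaluates Berezin integrals, including the fermionic Gaussian ∫dθ̄ dθ exp(-θ̄ a θ) = a.

from itertools import product

# An element of the Grassmann algebra: dict {sorted tuple of generator indices: coefficient}.
def g(i):
    """The generator theta_i."""
    return {(i,): 1.0}

def add(f, h):
    out = dict(f)
    for m, c in h.items():
        out[m] = out.get(m, 0.0) + c
    return out

def scale(a, f):
    return {m: a * c for m, c in f.items()}

def mul(f, h):
    out = {}
    for (m1, c1), (m2, c2) in product(f.items(), h.items()):
        if set(m1) & set(m2):
            continue  # theta_i^2 = 0
        # sign from anticommuting the generators of m2 past those of m1 while sorting
        sign = (-1) ** sum(1 for a in m1 for b in m2 if a > b)
        m = tuple(sorted(m1 + m2))
        out[m] = out.get(m, 0.0) + sign * c1 * c2
    return out

def berezin(f, i):
    """Berezin integral over theta_i, i.e. the left derivative w.r.t. theta_i."""
    out = {}
    for m, c in f.items():
        if i not in m:
            continue  # implements int d(theta_i) 1 = 0
        p = m.index(i)  # anticommute theta_i to the front: sign (-1)^p
        out[m[:p] + m[p + 1:]] = out.get(m[:p] + m[p + 1:], 0.0) + ((-1) ** p) * c
    return out

one = {(): 1.0}
theta, thetabar = g(0), g(1)

# int d(theta) (3 + 5*theta) = 5: the functional just picks out the top coefficient.
print(berezin(add(scale(3.0, one), scale(5.0, theta)), 0))   # {(): 5.0}

# Fermionic Gaussian: exp(-thetabar*a*theta) = 1 - a*thetabar*theta exactly, since its square is 0.
a = 2.7
integrand = add(one, scale(-a, mul(thetabar, theta)))
print(berezin(berezin(integrand, 0), 1))                     # {(): 2.7}, i.e. = a

Compare the bosonic Gaussian, which would give 1/a instead: that flip is the whole "fermion determinant in the numerator" story.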

You might have been able to be helpful. Unfortunately, I think an anxiety-induced existential crisis may have led you to be too thirsty in your desire to call someone other than yourself a moron.

In many places, you've completely missed my point and given some irrelevant definition. In particular, where you mention the Heisenberg algebra, I was referring to the field strength operator, not the creation/annihilation operator. Please see any introductory QFT text to see that I am correct. Here is a helpful reference (which you can check for yourself, since I'm sure you are not busy with any important research): theory.caltech.edu/~kapustin/Ph205/2013/fall2.pdf

See the top of page 2. A lot of your misunderstanding stems from this. It's such an amateurish mistake. I feel some second-hand embarrassment.

I might be able to get something out of this, though. What is the definition of path integration you use in axiomatic field theory?

>Unfortunately, I think an anxiety-induced existential crisis may have led you to be too thirsty in your desire to call someone other than yourself a moron.
Good psychological projection, though I wouldn't be surprised if you don't know the definition of a projection in the mathematical sense either at this point.
>field strength operator
None of those things are "field strength" operators, they're the canonical field coordinates; nowhere do the notes you've linked mention any "field strengths" at all. Conventionally what people in the know (i.e. not you) call the field strength is the curvature of the connection 1-form of a gauge potential, which has nothing to do with the quantization of the canonical field coordinates [math]\{\phi_a,\pi_a\}[/math] for a free non-gauge scalar field (which is what you've linked).
>not the creation/annihilation operator
In the free field theory, the canonical field coordinates can be decomposed into plane-wave modes, the coefficients of which generate the Heisenberg algebra. In fact Wick's theorem, and with it the entirety of perturbative QFT, relies on this assumption (i.e. the existence of an interaction picture).
>A lot of your misunderstanding stems from this.
Another projection. You sure like pulling shit out of your ass don't you?
>What is the definition of path integration you use in axiomatic field theory?
Undergrad explanation: it's the sum over all paths after inserting multiple resolutions of identity made of position/momentum eigenstates into the Green function (propagator) [math]\langle x| e^{-iH (t-t_0)}|y\rangle[/math]. (A toy numerical version of this is sketched after this post.)
Graduate explanation: it's the integration over the moduli space of connections.
Please go read Weinberg before you embarrass yourself again. The Dunning-Kruger is overwhelming.
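If you want the undergrad explanation with actual numbers, here is a toy sketch (my own; free particle, m = ħ = 1, done in imaginary time so the short-time kernels are real Gaussians and the numerics stay tame). Composing short-time kernels on a position grid - i.e. inserting a resolution of identity at every time slice, with each integral over an intermediate position done as a Riemann sum - reproduces the exact finite-time kernel.

import numpy as np

# Euclidean free-particle kernel K_tau(x, y) = <x| exp(-tau p^2 / 2) |y>
def kernel(x, y, tau):
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2.0 * tau)) / np.sqrt(2.0 * np.pi * tau)

x = np.linspace(-10.0, 10.0, 801)   # position grid playing the role of the inserted identity
dx = x[1] - x[0]

T, N = 1.0, 10                      # total Euclidean time, number of time slices
dt = T / N

K_dt = kernel(x, x, dt)
K_sliced = K_dt.copy()
for _ in range(N - 1):
    K_sliced = (K_sliced @ K_dt) * dx   # sum over the intermediate position at this slice

K_exact = kernel(x, x, T)

# Compare away from the artificial box edges; the difference is pure discretization error.
mid = slice(300, 501)
print(np.max(np.abs(K_sliced[mid, mid] - K_exact[mid, mid])))   # tiny

The real-time version is the same game with exp(-iHt), just with oscillatory kernels that need more care to discretize.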

>None of those things are "field strength" operators, they're the canonical field coordinates; nowhere do the notes you've linked mention any "field strengths" at all. Conventionally what people in the know (i.e. not you) call the field strength is the curvature of the connection 1-form of a gauge potential, which has nothing to do with the quantization of the canonical field coordinates {ϕ_a, π_a} for a free non-gauge scalar field (which is what you've linked).

I shouldn't have said field strength. You're right - that was wrong of me.

I meant the canonical field coordinates. You said before that "Fock states are eigenstates of the position and momentum operators, not of the fields themselves". Well, the position operator here is the canonical field coordinate. The quantum states, as I said before, are just eigenstates of those operators. I think it was just a simple misunderstanding, because you still thought I was referring to the "coefficients which generate the Heisenberg algebra", when I wasn't.

So, now that we've got our definitions straight (it was my fault for not getting them straight in the first place - sorry! Like you said, I was just a bit frustrated and was projecting), maybe I can ask a better question. Again, I'm a moron. Also, I am having a crisis and am very unstable (close to suicide). Please fulfill a dying man's last request.

Fermionic degrees of freedom also have canonical field coordinates, or "position operators" for every point in spacetime (at least, this is how we talk about them in my middle school class - I am an adult stuck in middle school trying to learn things I shouldn't be trying to learn). I'd like to talk about these. Can those operators be simultaneously diagonalized? That's one of my main concerns. Please make fun of me all you want, but if you do please just answer that question!

In my horrible state university where there are no good research professors (my university doubles as a middle school for kids in the next, richer city over), they teach bosonic path integrals in the following way.

You can make multiple resolutions of identities made of position/momentum eigenstates just like you said. That means you necessarily sum over states that simultaneously diagonalize the position operator at every point in spacetime. This, I figure, is only possible because of the boson commutation relations and some details about the spectral theorem. In other words, that procedure doesn't work for Fermions and that bothers me.

Does that make sense?

Also, by the way, because this is really bothering me, you seemed to imply that instantons mean that the statement "field configurations are in one-to-one correspondence with a complete set of quantum states" is false. I don't really follow that. I always think of instantons as a sequence of path configurations that minimizes the Euclidean action. Does that mean you could find eigenstates of the position operators that correspond to the instanton configuration at any time?

I have to go to bed. I'd really appreciate any help you can give me. I'm not sure why you started this whole thing off being so grumpy, but I'm sorry my post offended you so much.

Wish you the best.

>Well, the position operator here is the canonical field coordinate.
No. There's a difference between the position operator [math]\hat{q}[/math] (which defines Fock vectors [math]|\{q_{a_i}\}\rangle \equiv \prod_i \hat{q}_{a_i} |0\rangle[/math]) and the canonical position coordinate [math]\phi_a[/math]. This is why you need to understand the terms first before forming your ideas.
>The quantum states, as I said before, are just eigenstates of those operators.
No. The quantum states are quite literally whatever I say they are, though which ones I choose depends on what's the most useful to the situation at hand. For instance the ease of computation of the Fock states may prompt particle physicists to use the Fock space [math]H \equiv \operatorname{Span}\{|0\rangle, |q_{a_1}\rangle, |q_{a_1},q_{a_2}\rangle, \dots, |q_{a_1},q_{a_2},\dots, q_{a_n}\rangle, \dots\}[/math] as the Hilbert space of states, while condensed matter physicists might use the Hilbert space of Bloch electrons.
>Can those operators be simultaneously diagonalized?
No, that wouldn't generally happen, and neither could bosonic fields except at space-like separated points due to the microcausality axiom. Again, fermionic fields form generators of an infinite dimensional Clifford algebra, which has specific matrix representations in which all the basic group theoretic results such as Schur's lemma and Casimir elements apply just the same.
>In other words, that procedure doesn't work for Fermions and that bothers me.
It does work, because the resolution of identity is a statement on the Hilbert space [math]H[/math], not on the representation of bounded linear operators [math]H^*[/math] (of trace class, part of the local net of polynomial operator algebras, etc.). They exist and work all the same whether you have a bosonic or a fermionic representation.

>I always think of instantons as a sequence of path configurations that minimizes the Euclidean action
What you've described are stationary phases of the path integral, which you can use to evaluate (a very special class) of Feynman path integrals. Instantons are very special types of these that aren't [math]L_\text{loc}^\infty[/math], namely they don't fall off to 0 at spatial infinity. This means that their asymptotics cannot be captured by stationary phase approximations and are characterized instead by non-trivial topology of the model, either that of the base manifold [math]M[/math] or that of the gauge principal [math]G[/math]-bundle. This is why instantons often show up as anomalies in topologically non-trivial quantum field theories.
The point I was trying to make before was that these instantons are zero energy stationary states, which means that they form a subspace in your Hilbert space which is invariant under isometries (e.g. Lorentz transformations) of your field operators. Meaning that you can generate these states from the same field operator by an action of those isometries, so you get more quantum states than field operators.
> I'm not sure why you started this whole thing off being so grumpy
Because I have to deal with cranks on a biweekly basis, and the way you've structured your questions was reminiscent of how a crank might structure theirs.

I can't sleep, mainly because of this. I think most of the issues are that I am not precise enough. Sorry for any confusion that arises because of that.

A few things to clear up.

>No. There's a difference between the position operator q̂ (which defines Fock vectors |{q_{a_i}}⟩ ≡ ∏_i q̂_{a_i} |0⟩) and the canonical position coordinate ϕ_a. This is why you need to understand the terms first before forming your ideas.

I usually think of this as follows. Please let me know where I diverge from you in an essential way. Fock vectors are defined as raising operators acting on the vacuum. Can you please tell me the relationship between your position operator and those raising/lowering operators? Also, the relationship between the canonical position coordinate and those position operators? This comment is the first time I've really felt I wasn't on the same page as you.

>No, that wouldn't generally happen, and neither could bosonic fields except at space-like separated points due to the microcausality axiom. Again, fermionic fields form generators of an infinite dimensional Clifford algebra, which has specific matrix representations in which all the basic group theoretic results such as Schur's lemma and Casimir elements apply just the same.

Sorry - I was unclear again. Not only that, but actually what I wrote was flat out wrong. Again I wasn't being careful (since you came to Veeky Forums, you should have expected to do some babysitting - I am a clumsy baby). Let's define canonical position coordinates at every point in spacetime. Now let's group those operators together at each time (defining some reference frame). Because of the microcausality axiom, all of these operators should commute - at least, they should if they are bosonic. What happens for fermions? The eigenvalues of the fermionic canonical position coordinates are Grassmann numbers, right? It seems that those fermionic operators should anti-commute.

Of course, I only meant that operators in each group commute.

The problem is really simple. It's not even a problem, it's just my own inability to stomach something - there is no mathematical/physical contradiction I can present to you. It is easy for me to think about field configurations for bosons. The field configurations are eigenstates of the canonical position coordinates at some time.

How do you think about field configurations for fermions? The canonical position coordinates for fermions anti-commute. You cannot group the canonical position coordinates for fermions into sets of operators evaluated at equal times and simultaneously diagonalize those operators like you did for bosons. So, I'm at a loss.

The main point (and please, for the love of god, expand on this as much as you can bear to): How do you think about/define a field configuration for fermions? Are they Grassmann-valued fields? Does it make sense to sum over Grassmann-valued fields like it makes sense to sum over complex-valued fields?

>The field configurations are eigenstates of the canonical position coordinates at some time.

Maybe that was too clumsy again. Field configurations are functions from spacetime to the complex numbers. At each spacetime point, you associate the eigenvalue of the canonical position operator. My only point is that for each field configuration, there exists a state in the Hilbert space corresponding to that field configuration. There is only an injection into the Hilbert space. I can't believe before I said they were in one-to-one correspondence... That is so stupid, I can now see why you thought I was a complete moron.

>Fock vectors are defined as raising operators acting on the vacuum
Fock vectors are states, they are not operators. Given a vacuum, ladder operators can define Fock spaces by creation and annihilation of excited states. In much the same way the position operators create Fock states [math]|q\rangle[/math] with eigenvalue [math]\hat{q}|q\rangle = q|q\rangle[/math].
>Also, the relationship between the canonical position coordinate and those position operators?
One defines the Fock space and the other defines the canonical formalism. They're vastly different approaches to ultimately the same problem; in fact it's nothing short of a miracle that this is the case (viz. what Weinberg says at the end of Chapter 7).
>Because of the micocausality axiom, all of these operators should commute
At spacelike-separated points.
>at least, they should if they are bosonic.
For fermionic operators as well. The "commutation" statements coming from quantum states stay the same while the "Grassmann" statements come from representations of these quantum states in the coordinate basis.
>How do you think about field configurations for fermions?
It's just another representation we can choose for the quantum states in the coordinate basis. The spin group representation axiom of QFT allows for both bosonic and fermionic representations to be defined, and this is exactly what allows you to prove the spin-statistics theorem.
>and simultaneously diagonalize those operators like you did for bosons. So, I'm at a loss.
Why would you expect to do the same thing for fermions as you did for bosons?
>Are they grassmann valued fields?
In the canonical fermionic representation yes.
>Does it make sense to sum over grassmann valued fields like it makes sense to sum over complex number valued fields?
Yes. If you understand summation of Grassmann numbers you should equally well understand summation of Grassmann fields. (The single-mode version of what this buys you is spelled out after this post.)

>My only point is that for each field configuration, there exists a state in the Hilbert space corresponding to that field configuration.
This is not strictly true, and neither is the converse. The asymptotic states [math]\Psi_\text{in/out}[/math] may not have an [math]L^2[/math] representation, and an instanton (a single quantum state) gives rise to a number of distinct field configurations equal to the topological charge of the model. The former leads to the divergence of the S-matrix and is an independent problem. The asymptotic completeness axiom can sweep this problem under the rug but its consistency with the rest of the Wightman/Osterwalder-Schrader axioms is still tenuous at best.
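Since the question of what "summing over Grassmann-valued field configurations" actually means keeps coming back: here is the standard single-mode construction (textbook material, e.g. Negele-Orland chapter 1, not something from earlier in this thread; sign and ordering conventions differ between books). Take a Grassmann number [math]\theta[/math] that anticommutes with [math]c[/math] and [math]c^\dagger[/math] and define the fermionic coherent state
[math]|\theta\rangle = |0\rangle - \theta|1\rangle, \qquad c|\theta\rangle = \theta|\theta\rangle,[/math]
with [math]\langle\theta| = \langle 0| - \langle 1|\bar\theta[/math]. These states are overcomplete and satisfy
[math]\int d\bar\theta\, d\theta\; e^{-\bar\theta\theta}\, |\theta\rangle\langle\theta| = 1.[/math]
Inserting this resolution of identity at every time slice is what generates the Berezin integral over Grassmann trajectories, in exact parallel with inserting position/momentum eigenstates in the bosonic case. So the "sum over Grassmann field configurations" is a sum over the labels of an overcomplete family of perfectly ordinary Hilbert-space states, not over eigenstates of a normal operator.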

>Fock vectors are states, they are not operators.

Again, my issue with wording. I meant that Fock vectors are those states you get after operating on the vacuum with raising/lowering operators. We are in agreement here.

>One defines the Fock space and the other defines the canonical formalism. They're vastly different approaches to ultimately the same problem; in fact it's nothing short of a miracle that this is the case (viz. what Weinberg says at the end of Chapter 7).

I will definitely read this (or attempt to).

>At spacelike-separated points.

Right - because I defined the sets of operators to be at the same time, they are all spacelike-separated.

>For fermionic operators as well. The "commutation" statements coming from quantum states stay the same while the "Grassman" statements come from representations of these quantum states in the coordinate basis.
Understood.

>Yes. If you understand summation of Grassmann numbers you should equally well understand summation of Grassmann fields.

I do not understand summation of Grassmann numbers. Could you elaborate? What does summation of Grassmann numbers mean to you? Does it have any relationship with the Berezin integral?

>Why would you expect to do the same thing for fermions as you did for bosons?

I don't expect the same thing, and that is actually the whole problem. I've come to realize that my whole understanding of path integration is too closely tied to bosons.

>What does summation of Grassmann numbers mean to you?
Same as the action of any Abelian group on a free module.
>Does it have any relationship with the Berezin integral?
Not directly. As far as I know Berezin integration is just a linear functional that satisfies the Grassmann analogue of the FTC; though this fact should be enough to convince you that this is indeed what an actual "integration over Grassmann fields" (in whatever sense you want) must do. Similar to how the equivariant fibre integration is just a linear functional defined on the de Rham differential algebra of a [math]G[/math]-principal bundle, but it leads to the Atiyah-Bott formula which in turn reduces to the stationary phase approximation in classical mechanics (formulated in terms of Riemann integrals).
>I've come to realize that my whole understanding of path integration is too closely tied to bosons.
Once you've passed to the path integral formalism the quantum data (where the canonical commutation relations come from) of your fields decouple from its functional data (where the bosonic/fermionic distinction comes from). All of the quantum data has been shoved into the path integral measure and the limits, so if you're just doing path integrals you wouldn't even need to worry about the CCRs.

>Same as the action of any Aeblian group on a free module.

After looking up "free module", I agree. No issues here, then.

>Not directly. As far as I know Berezin integration is just a linear functional that satisfies the Grassmann analogue of the FTC; though this fact should be enough to convince you that this is indeed what an actual "integration over Grassmann fields" (in whatever sense you want) must do. Similar to how the equivariant fibre integration is just a linear functional defined on the de Rham differential algebra of a G-principal bundle, but it leads to the Atiyah-Bott formula which in turn reduces to the stationary phase approximation in classical mechanics (formulated in terms of Riemann integrals).

I think this is exactly the sort of thing I want. Firstly, what does FTC stand for? I can't find it online and don't want to leave that avenue unexplored (please don't say Federal Trade Commission).

I want a good reason why Berezin integration makes sense as a definition for an integral (I hope that makes sense), and this seems to be exactly it! You even say that it reduces to the appropriate thing in classical mechanics, which really makes me excited.

I will have to do a lot of reading to digest this, but I really think this hits the mark. After seeing so many horrible explanations, this is the first time I feel genuine hope of understanding this. So, really, thank you for the input.

Let me end by asking for one last thing. Is there any good reference explaining the relationship between the Atiyah-Bott formula and how it reduces to the stationary phase approximation in classical mechanics? A good reference for this "equivariant fibre integration" in general?

Seriously, I appreciate this so much. Maybe it's because I am deprived of sleep, but this is wonderful. I will go to sleep after seeing your reply, I don't think I need anything else. Thank you! I was seriously desperate.

>FTC
Fundamental Theorem of Calculus. The reason I say that this should convince you as to why Berezin integration is really an integral is because FTC-like or Stokes-like theorems are basically continuous versions of telescoping sums. They really are implicitly summing things. (This is spelled out a bit more after this post.)
>Is there any good reference explaining the relationships between Atiyah-Bott formula and how it reduces to the stationary phase approximation in classical mechanics?
Start here arxiv.org/abs/1305.4293.
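To spell out the FTC point (my paraphrase of the standard motivation, nothing beyond what any text on Berezin integration states): for ordinary functions that decay at infinity, [math]\int dx\, f'(x) = 0[/math], and linearity plus invariance under shifts is essentially what pins down Lebesgue integration up to normalization. Berezin integration is defined by imposing the same two requirements on the Grassmann algebra,
[math]\int d\theta\, \partial_\theta f = 0, \qquad \int d\theta\, f(\theta + \eta) = \int d\theta\, f(\theta).[/math]
For [math]f(\theta) = a + b\theta[/math] these force [math]\int d\theta\, 1 = 0[/math] and leave [math]\int d\theta\, \theta[/math] as a free normalization, conventionally set to 1. So the Berezin integral is the unique (up to scale) linear functional that behaves under shifts the way an honest sum over configurations would.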

Perfect. I am at peace (at least for now). Thank you so much, I really appreciate it.

Hey, I believe you recommended a book to me a while ago (Gauge Fields, Knots and Gravity by Baez). Just wanted to thank you, the book was/is fantastic.

Also I have an aversion to Grassmann numbers. Is there any good reading or intuition for what's going on with a Grassmann variable other than just a math definition?

Also, is it possible to use some other representation of Grassmann variables, like a matrix form? Why keep them as numbers?

Holy frig dude have mercy

>Is there any good reading or intuition for what's going on with a Grassman variable other than just a math definition?
Your (along with the OP's) problem is that you think all numbers should commute, when there exist associative unital rings (which are our current models of "numbers") that don't. What you need is not gaining intuition for working with Grassmann numbers, but discarding your previous intuition about how numbers work.
>Also, is it possible to use some other representation of Grassman variables, like a matrix form?
Sure. As I've said before they form Clifford algebras which have matrix representations (a concrete construction is sketched at the end of this post).
>Why keep them as numbers?
Because it's easy (and in a sense the minimal model) to work with. The machinery afforded by Clifford algebras is much more powerful than what's needed for Grassmann variables so why kill a fly with a hammer?
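To make the matrix remark concrete, here is a standard Jordan-Wigner-type construction (my own numpy sketch): tensor products of Pauli matrices give matrices that pairwise anticommute and square to zero, which is everything a set of Grassmann generators is required to do.

import numpy as np

I2 = np.eye(2)
sz = np.diag([1.0, -1.0])                 # sigma_z
sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # sigma_plus, nilpotent

def kron_all(mats):
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

def grassmann_generator(i, n):
    """theta_i -> sz x ... x sz x sp x I x ... x I  (sp in slot i, a string of sz before it)."""
    return kron_all([sz] * i + [sp] + [I2] * (n - i - 1))

n = 3
thetas = [grassmann_generator(i, n) for i in range(n)]

# Defining relations: theta_i theta_j + theta_j theta_i = 0 for all i, j (i = j gives nilpotency).
for i in range(n):
    for j in range(n):
        assert np.allclose(thetas[i] @ thetas[j] + thetas[j] @ thetas[i], 0.0)
print("all pairs anticommute; each generator squares to zero")

The price is that the matrices are 2^n x 2^n, which is why nobody bothers: the abstract Grassmann rules carry the same information with none of the overhead.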

>Your (along with the OP's) problem is that you think all numbers should commute

My problem was exactly the opposite. I really want to take Grassmann numbers seriously. The problem is that people write things like "The reader may ask, what do you mean by 'use Grassmann numbers to represent the fermion operators'? Well, I have to say I do not know. I do not know the physical meaning of a Grassmann number". They also say things like "Grassmann numbers are just a formal trick". It makes every definition that then follows (Berezin integration, differentiation of Grassmann numbers, "Taylor expanding" Grassmann valued functions) look completely artificial. When you are done with defining the Fermionic path integral, you are left wondering: "Where in the previous argument did we sum over histories of Grassmann fields?" It isn't clear, at least not to me.

What helps are two things. First of all, you posted that paper about Equivariant Cohomology. I can't read it yet, but I intend to once I learn the mathematical background. Second of all, there is this discussion: terrytao.wordpress.com/2013/02/19/supercommutative-gaussian-integration-and-the-gaussian-unitary-ensemble/#i-ident

This makes it at least seem like maybe you are summing over histories of Grassmann-valued fields; the usual definitions are just clever ways of abstracting the notion of "integration" in a way that doesn't depend on analytic properties.

When Grassmann numbers and the integration functional are properly defined AND motivated, there are hardly any issues.

>what do you mean by 'use Grassmann numbers to represent the fermion operators'?
Have you been listening at all? Grassmann functions are representations of fermionic wavefunctions in the coordinate basis.
>It makes every definition that then follows (Berezin integration, differentiation of Grassmann numbers, "Taylor expanding" Grassmann valued functions) look completely artificial.
Ask yourself "why are c-number functions not artificial?" The way those numbers work is also artificially constructed to suit our own understanding of the world.
>When Grassmann numbers and the integration functional are properly defined AND motivated, there are hardly any issues.
They are, and you should keep your issues to yourself before you've understood them.
Also you'll have to excuse me for the late replies. I'm currently organizing a workshop on topological field theory at California.

>Have you been listening at all? Grassmann functions are representations of fermionic wavefunctions in the coordinate basis.

Actually, I have been listening. I gave that quote as an example of something that is misleading (I thought I made that obvious? I guess not). In fact, those are not my words. That is a direct quote from "Quantum Field Theory of Many-Body Systems" by Xiao-Gang Wen. Maybe if he's at your conference on topological field theory at California you can challenge him on that.

>Ask yourself "why are c-number functions not artificial?" The way those numbers work is also artificially constructed to suit our own understanding of the world.
I agree. I am not arguing Grassmann numbers are artificial (I have said many times that I want to take them seriously). I am just saying that they seem artificial (seem != are), and without an aside like the one in Tao's website or that you gave, I think discussions about them in textbooks are incomplete.

>They are,
Yep, that's what I said!
>and you should keep your issues to yourself before you've understood them.
Haha, but if I had kept this issue to myself I never would have been led to actually understand them!

Actually, I'm not sure if I understand them. Let me ask you a quick question. Before, you said the Berezin integration was Stokes-like or FTC-like. I think I know what that means, but I should probably keep my issues to myself before I'm sure I've understood them. So, could you elaborate a bit? Be a little more specific?

>I think discussions about them in textbooks are incomplete

I should be more specific here. Discussions about *integrating them* are incomplete.

Btfo

motls.blogspot.co.uk/2011/11/celebrating-grassmann-numbers.html

Read this OP, Lubosh explains why Grassmann numbers are just a piece of maths used to think about spinors the same way we think about scalars and vectors, as classical fields that give rise to quantum fields when path integrated over.

Some of this is helpful, but there are parts of it that seem badly wrong. Here's an excerpt.

> We've seen that the probabilities are computed from probability amplitudes and/or integrals over histories of Grassmann variables. The latter don't even exist as elements of a well-defined set so they obviously can't be composed out of (or functions of) anything that is "real".

Am I reading this correctly?: Grassmann variables don't exist as elements of a well-defined set? It's clear that isn't true, right? What do you think?

Statements like these make Grassmann numbers seem too mysterious for my taste, and is (or rather, was) my problem with them.

cambridge.org/core/books/functional-integration/2D7B67771B4AF069B0092164B1A0E2DB

Chap 9 is not a bad discussion. Points out some good abstractions.